Managing DDF
Overview
Distributed Data Framework (DDF) is an agile and modular integration framework. It is primarily focused on data integration, enabling clients to insert, query, and transform information from disparate data sources via the DDF Catalog. A Catalog API allows integrators to insert new capabilities at various stages of each operation. This page provides instructions to install, start, and stop DDF.
Documentation Guide
Introduction
Overview
This document serves to guide users, administrators and developers through the DDF.
Documentation Updates
The most current Distributed Data Framework (DDF) documentation is available at: https://tools.codice.org/wiki/display/DDF/
Conventions
The following conventions are used within this documentation:
Tip: This is a Tip, used to provide helpful information.
Note: This is an Informational Note, used to emphasize points, remind users of beneficial information, or indicate minor problems in the outcome of an operation.
Important: This is an Emphasized Note, used to inform of important information.
Warning: This is a Warning, used to alert users about the possibility of an undesirable outcome or condition.
Customizable Values
Many values used in descriptions are customizable and should be changed for specific use cases. These values are denoted by < >, and by [[ ]] when within XML syntax. When using a real value, the placeholder characters should be omitted.
Code Values
Java objects, lines of code, or file properties are denoted with the Monospace font style. Example: `ddf.catalog.CatalogFramework`
Hyperlinks
Some hyperlinks (e.g., /admin) within the documentation assume a locally running installation of DDF. Simply change the hostname if accessing a remote host.
Questions
Questions about DDF or this documentation should be posted to the DDF-users forum (https://groups.google.com/d/forum/ddf-users), DDF-announcements forum (https://groups.google.com/d/forum/ddf-announcements), or DDF-developers forum (https://groups.google.com/d/forum/ddf-developers), where they will be answered quickly by a member of the DDF team.
Applications
DDF comprises several modular applications, which can be installed or uninstalled as needed.
- DDF Administrative Application: The administrative application enhances administrative capabilities when installing and managing DDF. It contains various services and interfaces that allow administrators more control over their systems.
- DDF Catalog Application: The DDF Catalog provides a framework for storing, searching, processing, and transforming information. Clients typically perform query, create, read, update, and delete (QCRUD) operations against the Catalog. At the core of the Catalog functionality is the Catalog Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.
- DDF Content Application: The DDF Content application provides a framework for storing, reading, processing, transforming, and cataloging data.
- DDF Platform Application: The Platform application is a core application of the distribution. It provides the fundamental building blocks that the distribution needs to run. These building blocks include subsets of: Karaf (http://karaf.apache.org/), CXF (http://cxf.apache.org/), Cellar (http://karaf.apache.org/index/subprojects/cellar.html), and Camel (http://camel.apache.org/). The Platform application also includes a Command Scheduler, which allows users to schedule Command Line Shell Commands to run at specified intervals.
- DDF Security Application: The Security application provides authentication, authorization, and auditing services for DDF. These comprise both a framework that developers and integrators can extend and a reference implementation that meets security requirements. More information about the security framework and how everything works as a single security solution can be found on the Managing Web Service Security page.
- DDF Solr Catalog Application: The Solr Catalog Provider (SCP) is an implementation of the CatalogProvider interface using Apache Solr (http://lucene.apache.org/solr/) as a data store.
- DDF Spatial Application: The DDF Spatial Application provides a KML transformer and a KML network link endpoint that allows a user to generate a View-based KML Query Results Network Link.
- DDF Standard Search UI: The DDF Standard Search UI application allows a user to search for records in the local Catalog (provider) and federated sources. Results of the search are returned in HTML format and are displayed on a globe, providing a visual representation of where the records were found.
Choosing a Guide
The documentation is segmented by user needs, with users categorized as Users, Administrators, Integrators, and Developers.
- Users: End users interacting with the applications at the most basic level.
- Administrators: Administrators install, maintain, and support existing applications.
- Integrators: Integrators use the existing applications to support their external frameworks.
- Developers: Developers build or extend the functionality of the applications.
Quick Start
This quick tutorial will demonstrate:
- Installation
- Catalog capabilities: ingest and query using every endpoint
- Use of the Content Framework
- Metrics reporting
Prerequisites
Review Prerequisites to ensure all system prerequisites are met.
Install DDF
- Install DDF by unzipping the zip file. This creates an installation directory, typically named with the name and version of the application. This installation directory will be referred to as <DISTRIBUTION_INSTALL_DIR>; substitute the actual directory name in its place.
- Start DDF by running the <DISTRIBUTION_INSTALL_DIR>/bin/ddf script (or ddf.bat on Windows).
- Verify the distribution is running.
- Go to https://localhost:8993/admin.
- Enter the default username of "admin" (no quotes) and the password of "admin" (no quotes).
- Follow the install instructions for more extensive install guidance, or use the command line console (which appears after the <DISTRIBUTION_INSTALL_DIR>/bin/ddf script starts) to install a few applications as mentioned below.
app:start catalog-app
app:start content-app
app:start solr-app
Other applications may be installed at a later time.
- After the installation has been configured, the instance should be restarted.
- Go to https://localhost:8993/services and verify that five REST services are available: admin, application, metrics, catalog, and catalog/query.
- Click the link to each REST service's WADL to see its interface.
- In the Admin Console (at /admin), configure the system settings.
- Enter the username of "admin" (no quotes) and the password "admin" (no quotes).
- Select the Platform app.
- Select Platform Global Configuration.
- Enter the port and host where the distribution is running.
Catalog Capabilities
- Create an entry in the Catalog by ingesting a valid GeoJSON file (attached to this page). This ingest can be performed using:
- A REST client, such as Google Chrome's Advanced REST Client, OR
- The following curl command to POST to the Catalog REST CRUD endpoint.
Windows Example:
curl.exe -H "Content-type: application/json;id=geojson" -i -X POST -d @"C:\path\to\geojson_valid.json" https://localhost:8993/services/catalog
*NIX Example:
curl -H "Content-type: application/json;id=geojson" -i -X POST -d @geojson_valid.json https://localhost:8993/services/catalog
Where:
- -H adds an HTTP header. In this case, the Content-type header application/json;id=geojson is added to match the data being sent in the request.
- -i requests that HTTP headers be displayed in the response.
- -X specifies the type of HTTP operation. For this example, it is necessary to POST (ingest) data to the server.
- -d specifies the data sent in the POST request. The @ character is necessary to specify that the data is a file.
- The last parameter is the URL of the server that will receive the data.
This should return a response similar to the following (the actual catalog ID in the id and Location URL fields will be different):
Sample Response:
HTTP/1.1 201 Created
Content-Length: 0
Date: Mon, 22 Apr 2013 22:02:22 GMT
id: 44dc84da101c4f9d9f751e38d9c4d97b
Location: https://localhost:8993/services/catalog/44dc84da101c4f9d9f751e38d9c4d97b
Server: Jetty(7.5.4.v20111024)
- Verify the entry was successfully ingested by entering in a browser the URL returned in the POST response's HTTP header. In this example, it was /services/catalog/44dc84da101c4f9d9f751e38d9c4d97b. This should display the catalog entry as XML in the browser.
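The same verification can be done from the command line. The sketch below is an illustration, not part of the official instructions: it composes the GET request for a hypothetical catalog ID (substitute the ID from your own POST response) and echoes it as a dry run so it can be copied without a live server. The `-k` flag, an assumption here, tolerates the default self-signed certificate.

```shell
# Hypothetical values: substitute the ID returned by your own POST.
BASE_URL="https://localhost:8993/services/catalog"
CATALOG_ID="44dc84da101c4f9d9f751e38d9c4d97b"

# -k tolerates the default self-signed certificate; drop it once a
# trusted certificate is installed. Echoed as a dry run.
echo "curl -k -i ${BASE_URL}/${CATALOG_ID}"
```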
Verify the catalog entry exists by executing a query via the OpenSearch endpoint: enter the following URL in a browser: /services/catalog/query?q=ddf. A single result, in Atom format, should be returned.
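The same OpenSearch query can be issued with curl. This is a sketch assuming a default local install (host, port, and the `-k` self-signed-certificate flag are assumptions); the command is echoed as a dry run so it runs without a live server.

```shell
# Defaults for a local install; adjust host/port for remote systems.
HOST="https://localhost:8993"
QUERY="ddf"

# -k tolerates the default self-signed certificate. Echoed as a dry run.
echo "curl -k \"${HOST}/services/catalog/query?q=${QUERY}\""
```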
Use of the Content Framework
Using the Content framework’s directory monitor, ingest a file so that it is stored in the content repository with a metacard created and inserted into the Catalog.
- In the Web Console, select the Configuration tab.
- Select the Content Directory Monitor.
- Set the directory path to inbox.
- Click the Save button.
- Copy the attached GeoJSON file to the <DISTRIBUTION_INSTALL_DIR>/inbox directory.
The Content Framework will:
- ingest the file,
- store it in the content repository at <DISTRIBUTION_INSTALL_DIR>/content/store/<GUID>/geojson_valid.json,
- look up the GeoJson Input Transformer based on the mime type of the ingested file,
- create a metacard based on the metadata parsed from the ingested GeoJSON file, and
- insert the metacard into the Catalog using the CatalogFramework.
Note that XML metadata for text searching is not automatically generated from GeoJson fields.
- Verify the GeoJSON file was stored using the Content REST endpoint.
- Install the content-rest-endpoint feature using the Features tab in the Web Console.
- Send a GET request to read the content from the content repository using the Content REST endpoint. This can be done using the curl command below. Note that the GUID will be different for each ingest. The GUID can be determined by going to the <DISTRIBUTION_INSTALL_DIR>/content/store directory and copying the name of the sub-directory in this folder (there should only be one).
curl -X GET https://localhost:8993/services/content/c90147bf86294d46a9d35ebbd44992c5
The response to the GET command will be the contents of the geojson_valid.json file originally ingested.
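The GUID lookup described above can be scripted. The sketch below simulates the store layout in a temporary directory purely for illustration; against a real install, point `STORE` at `<DISTRIBUTION_INSTALL_DIR>/content/store` and run the composed curl command directly.

```shell
# Simulate the content store layout; in a real install, use
# <DISTRIBUTION_INSTALL_DIR>/content/store instead of the temp dir.
STORE=$(mktemp -d)
mkdir -p "${STORE}/c90147bf86294d46a9d35ebbd44992c5"

# There should be exactly one sub-directory, named by the GUID.
GUID=$(ls "${STORE}" | head -n 1)

# Compose the Content REST read request (dry run).
echo "curl -k https://localhost:8993/services/content/${GUID}"
```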
Metrics Reporting
Complete the following procedure now that several queries have been executed.
1. Open the Web Console (/system/console/metrics).
2. Select the PNG link for Catalog Queries under the column labeled 1h (one hour). A graph of the catalog queries performed in the last hour is displayed.
3. Select the browser's back button to return to the Metrics tab.
4. Select the XLS link for Catalog Queries under the column labeled 1d (one day).
Handy Tip: Based on the browser's configuration, the .xls file will be downloaded or automatically displayed in Excel.
Using DDF
Version 2.8.2. Copyright (c) Codice Foundation
This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0).
Overview
Distributed Data Framework (DDF) is an agile and modular integration framework. It is primarily focused on data integration, enabling clients to insert, query, and transform information from disparate data sources via the DDF Catalog. A Catalog API allows integrators to insert new capabilities at various stages of each operation.
Understanding Metadata and Metacards
Metadata is information about a resource, organized into a schema to make it possible to search against. The DDF Catalog stores this metadata and allows access to it. If desired, the DDF Content application can be installed to store the resources themselves. Metacards are single instances of metadata, representing a single record, in the Metadata Catalog (MDC). Metacards follow one of several schemas to ensure reliable, accurate, and complete metadata. Essentially, metacards function as containers of metadata.
Populating Metacards (during ingest)
Upon ingest, a metacard transformer will read the data from the ingested file and populate the fields of the metacard. Exactly how this is accomplished depends on the origin of the data, but most fields (except id) are imported directly.
Searching Metadata
DDF provides the capability to search the Metadata Catalog (MDC) for metadata. There are a number of different types of searches that can be performed on the MDC, and these searches are accessed using one of several interfaces. This section provides a very high-level overview of introductory searching concepts in DDF. These concepts are expanded upon in later sections.
Search Types
There are four basic types of metadata search. Additionally, any of the types can be combined to create a compound search.
Contextual Search
A contextual search is used when searching for textual information. It is similar to a Google search over the metadata contained in the MDC. Contextual searches may use wildcards, logical operators, and approximate matches.
Spatial Search
A spatial search is used for Area of Interest (AOI) searches. Polygon and point-radius searches are supported. Specifically, the spatial search looks at the metacard's location attribute; coordinates are specified in WGS 84 decimal degrees.
Temporal Search
A temporal search finds information from a specific time range. Two types of temporal searches are supported, relative and absolute. Relative searches contain an offset from the current time, while absolute searches contain a start and an end timestamp. Temporal searches can look at the effective date attribute or the modified date.
Datatype
A datatype search is used to search for metadata based on the datatype and, optionally, version. Wildcards (*) can be used in both the datatype and version fields. Metadata that matches any of the datatypes (and associated versions, if specified) will be returned. If a version is not specified, all metadata records for the specified datatype(s), regardless of version, will be returned.
Compound Search
These search types may be combined to create Compound searches. For example, a Contextual and Spatial search could be combined into one Compound search to search for certain text in metadata in a particular region of the world.
Search Interfaces
DDF Search UI Application
The DDF Search UI application provides a graphic interface to return results in HTML format and locate them on an interactive globe or map. For more details on using this application, go to DDF Search UI User’s Guide.
SSH
Additionally, it is possible to use a client script to remotely access DDF via SSH and send console commands to search and ingest data.
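A minimal sketch of such remote console access is shown below. The port and user are assumptions: Karaf's SSH console conventionally listens on port 8101 with the same admin credentials, but your deployment may override this (check etc/org.apache.karaf.shell.cfg). The command is echoed as a dry run so it can be copied without a running instance.

```shell
# Assumed defaults: Karaf's SSH console typically listens on port 8101;
# verify against your etc/org.apache.karaf.shell.cfg before relying on it.
SSH_PORT=8101
SSH_USER=admin

# Echoed as a dry run; run the printed command against a live instance.
echo "ssh -p ${SSH_PORT} ${SSH_USER}@localhost"
```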
Catalog Search Result Objects
Data is returned from searches as Catalog Search Result objects. This is a subtype of Catalog Entry that also contains additional data based on what type of sort policy was applied to the search. Because it is a subtype of Catalog Entry, a Catalog Search Result has all of a Catalog Entry's fields, such as metadata, effective time, and modified time. It also contains some of the following fields, depending on the type of search, that are populated by DDF when the search occurs:
- Distance: Populated when a point-radius spatial search occurs. A numerical value that indicates the result's distance from the center point of the search.
- Units: Populated when a point-radius spatial search occurs. Indicates the units (kilometer, mile, etc.) for the distance field.
- Relevance: Populated when a contextual search occurs. A numerical value that indicates how relevant the text in the result is to the text originally searched for.
Search Programmatic Flow
Searching the catalog involves three basic steps:
- Define the search criteria (contextual, spatial, temporal, or compound – a combination of two or more types of searches).
- Optionally define a sort policy and assign it to the criteria.
- For a contextual search, optionally set the fuzzy flag to true or false (the default value for the Metadata Catalog fuzzy flag is true, while the portal default value is false).
- For a contextual search, optionally set the caseSensitive flag to true (by default, the caseSensitive flag is NOT set and queries are not case sensitive). Doing so enables case-sensitive matching on the search criteria. For example, if caseSensitive is set to true and the phrase is "Baghdad", then only metadata containing "Baghdad" with the same matching case will be returned. Words such as "baghdad", "BAGHDAD", and "baghDad" will not be returned because they do not match the exact case of the search term.
- Issue a search.
- Examine the results.
These steps are performed in the same basic order but using different classes depending on whether the Web services or Search UI interfaces are used.
Sort Policies
Searches can also be sorted according to various built-in policies. A sort policy is applied to the search criteria after its creation but before the search is issued. The policy specifies to the DDF the order the MDC search results should be in when they are returned to the requesting client. Only one sort policy may be defined per search. There are three policies available.
| Sort Policy | Sorts By | Default Order | Available for |
|---|---|---|---|
| Temporal | The catalog search result's effective time field | Newest to oldest | All search types |
| Distance | The catalog search result's distance field | Nearest to farthest | Point-radius spatial searches |
| Relevance | The catalog search result's relevance field | Most to least relevant | Contextual searches |
If no sort policy is defined for a particular search, the temporal policy will automatically be applied.
Note: For compound searches, the parent compound search's sort policy is used. For example, if a spatial search and a contextual search are the components of a compound search, the spatial search might have a distance policy and the contextual search might have a relevance policy. The parent compound search, though, does not use the policies of its child objects to define its sorting approach; the compound search itself has its own temporal sort policy field that it will use to order the results of the search.
Prerequisites
- Supported platforms are *NIX (Unix/Linux/OSX), Solaris, and Windows.
- JDK 8 must be installed (http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- The JAVA_HOME environment variable must be set to the location where the JDK is installed.
*NIX:
JAVA_HOME=/usr/java/jdk1.8.0
export JAVA_HOME
Windows:
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0
Verify that JAVA_HOME was set correctly.
*NIX:
echo $JAVA_HOME
Windows:
echo %JAVA_HOME%
- DDF installation zip file.
- A web browser.
- For Linux systems, increase the file descriptor limit by editing /etc/sysctl.conf to include or change the following:
fs.file-max = 6815744
Restart: For the change to take effect, a restart is required.
init 6
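After the restart, the new limit can be confirmed from the shell. This is a sketch for Linux systems only; it reads the same kernel parameter that the sysctl.conf entry sets.

```shell
# Confirm the file descriptor limit after the restart; prints the
# kernel-wide maximum number of open file handles (fs.file-max).
cat /proc/sys/fs/file-max
```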
The Administration Web Console is not compatible with Internet Explorer. |
Installing
Note: The *NIX commands listed next to the steps were performed on a default installation of Red Hat Enterprise Linux 5.4. Permission maintenance is not mentioned in this article, but all files should be owned by the running user, regardless of platform.
DDF Installation Types: DDF can be installed using a single distribution zip file that contains all of the DDF applications already installed, OR a custom DDF installation can be performed by using the DDF kernel distribution zip file and then hot deploying the desired DDF apps into the running DDF kernel's <INSTALL_DIRECTORY>/deploy directory.
Note: Although DDF can be installed by any user, it is recommended for security reasons that a non-root user execute the DDF installation.
Use the DDF Distribution Zip to Install
- After the prerequisites have been met (as the root user, for *NIX), change the current directory to the desired install location. This will be referred to as <INSTALL_DIRECTORY>.
*NIX Tip: It is recommended that the root user create a new install directory that can be owned by a non-root user (e.g., ddf-user). The non-root user (e.g., ddf-user) can then be used for the remaining installation instructions.
mkdir new_installation
chown ddf-user:ddf-group new_installation
su - ddf-user
- Change the current directory to the location of the zip file (ddf-X.Y.zip).
*NIX:
cd /home/user/cdrom
Windows:
cd D:\
- Copy ddf-X.Y.zip to <INSTALL_DIRECTORY>.
*NIX:
cp ddf-X.Y.zip <INSTALL_DIRECTORY>
Windows:
copy ddf-X.Y.zip <INSTALL_DIRECTORY>
- Change the current directory to the desired install location.
cd <INSTALL_DIRECTORY>
- The DDF zip is now located within the <INSTALL_DIRECTORY>. Unzip ddf-X.Y.zip.
*NIX:
unzip ddf-X.Y.zip
Windows:
"C:\Program Files\Java\jdk1.8.0\bin\jar.exe" xf ddf-X.Y.zip
- Run DDF using the appropriate script.
*NIX:
<INSTALL_DIRECTORY>/ddf-X.Y/bin/ddf
Windows:
<INSTALL_DIRECTORY>/ddf-X.Y/bin/ddf.bat
-
Wait for the console prompt to appear.
ddf@local>
-
The distribution takes a few moments to load depending on the hardware configuration. Execute the following command at the command line for status:
ddf@local>list
-
Proceed to Configuration.
(Option 1/Preferred Method) Continue Setup and Installation Using the Installer Module of the Admin UI
Refer to the Installer Module instructions.
(Option 2/Part 1) Custom Installation Using the DDF Kernel Distribution Zip
- After the prerequisites have been met (as the root user, for *NIX), change the current directory to the desired install location. This will be referred to as <INSTALL_DIRECTORY>.
Note: It is recommended that the root user create a new install directory that can be owned by a non-root user (e.g., ddf-user). The non-root user (e.g., ddf-user) can then be used for the remaining installation instructions.
mkdir new_installation
chown ddf-user:ddf-group new_installation
su - ddf-user
- Change the current directory to the location of the zip file (ddf-kernel-X.Y.zip).
*NIX:
cd /home/user/cdrom
Windows:
cd D:\
- Copy ddf-kernel-X.Y.zip to <INSTALL_DIRECTORY>.
*NIX:
cp ddf-kernel-X.Y.zip <INSTALL_DIRECTORY>
Windows:
copy ddf-kernel-X.Y.zip <INSTALL_DIRECTORY>
- Change the current directory to the desired install location.
cd <INSTALL_DIRECTORY>
- The DDF kernel zip is now located within the <INSTALL_DIRECTORY>. Unzip ddf-kernel-X.Y.zip.
*NIX:
unzip ddf-kernel-X.Y.zip
Windows:
"C:\Program Files\Java\jdk1.8.0\bin\jar.exe" xf ddf-kernel-X.Y.zip
- Configure global properties in <INSTALL_DIRECTORY>/etc/system.properties:
org.codice.ddf.system.protocol=https://
org.codice.ddf.system.hostname=localhost
org.codice.ddf.system.httpsPort=8993
org.codice.ddf.system.httpPort=8181
org.codice.ddf.system.port=8993
org.codice.ddf.system.rootContext=/services
# Set the system information properties
org.codice.ddf.system.siteName=ddf.distribution
org.codice.ddf.system.siteContact=
org.codice.ddf.system.version=<latest version>
org.codice.ddf.system.organization=Codice Foundation
- If the DDF Standalone Solr Server will be installed later, an additional configuration step is required for the DDF kernel. Add the following lines to the bottom of the <INSTALL_DIR>/etc/org.ops4j.pax.web.cfg file:
# Jetty Configuration
org.ops4j.pax.web.config.file=${karaf.home}/etc/jetty.xml
- Run the DDF kernel using the appropriate script.
*NIX:
<INSTALL_DIRECTORY>/ddf-kernel-X.Y/bin/ddf
Windows:
<INSTALL_DIRECTORY>/ddf-kernel-X.Y/bin/ddf.bat
- Wait for the console prompt to appear.
Command Prompt when Initially Loaded:
ddf@local>
The distribution takes a few moments to load depending on the hardware configuration. Execute the following command at the command line for status:
ddf@local>list
The list of bundles should look similar to this:
ddf@local>list
START LEVEL 100 , List Threshold: 50
ID State Blueprint Spring Level Name
[ 111] [Active ] [ ] [ ] [ 80] Commons IO (2.1.0)
[ 112] [Resolved ] [ ] [ ] [ 80] {branding} :: Distribution :: Web Console (2.3.0)
Hosts: 76
[ 113] [Active ] [Created ] [ ] [ 80] {branding} :: Distribution :: Console Branding Plugin (2.3.0)
- Verify that the following DDF kernel features are installed by executing the features:list command and filtering for kernel distribution features:
ddf@local>features:list | grep kernel
[uninstalled] [2.3.1-SNAPSHOT ] custom-karaf-bundles kernel-2.3.1-SNAPSHOT Customized KARAF Bundles
[installed ] [2.3.1-SNAPSHOT ] kernel-webconsolebranding kernel-2.3.1-SNAPSHOT {branding} Web Admin Console branding
DDF Application Installation Dependencies: Please read the installation instructions carefully for each DDF application because some of the applications depend on other DDF applications having been previously installed. The order in which the DDF applications are listed below is the recommended order of installation. Each application must be active before the next application is installed. The app:list console command can be used to verify the state of each app as it is installed.
- Install the Platform application by following the Platform Application Installation instructions.
- Install any optional applications that may be needed for the desired configuration.
- Catalog Application Installation instructions
- Security Application Installation instructions
- Content Application Installation instructions
- Spatial Application Installation instructions
- Solr Catalog Application Installation instructions are included in each of the Solr Catalog configurations. Refer to the appropriate section for the desired Solr Catalog Provider configuration's installation instructions.
- Embedded Solr Catalog Provider
- External Solr Catalog Provider
- Standalone Solr Server
- Proceed to Configuration.
(Option 2/Part 2) Configuration
- Open a compatible web browser and log in to the Administrator Console (https://localhost:8993/admin) with username "admin" and password "admin" (no quotes).
- Select the Configuration tab in the Administrator Console.
- Select Catalog Sorted Federation Strategy.
- In the Maximum start index field, enter the maximum query offset number or keep the default setting. Refer to Standard Catalog Framework for additional information.
- Select the Save button.
Verification
At this point, DDF should be configured and running with a Solr Catalog Provider. New features (endpoints, services, and sites) can be added as needed.
Verification is achieved by checking that all of the DDF bundles are in an Active state (excluding fragment bundles which remain in a Resolved state).
The following command displays the status of all the DDF bundles:
ddf@local>list | grep -i ddf
[ 261] [Resolved ] [ ] [ ] [ 80] {branding} :: Distribution :: Web Console (2.2.0)
Hosts: 76
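Spotting non-Active bundles in a long list by eye is error-prone; the check can be scripted. The sketch below works on a captured copy of the console's list output (two sample lines in the same layout are written to a temp file purely for illustration). Anything not Active deserves a look, remembering that fragment bundles legitimately remain Resolved.

```shell
# Two sample lines mimicking the console's `list` output layout; in a
# real check, capture the actual output to this file instead.
cat <<'EOF' > /tmp/bundles.txt
[ 260] [Active   ] [Created ] [        ] [ 80] DDF :: Catalog :: Core (2.2.0)
[ 261] [Resolved ] [        ] [        ] [ 80] DDF :: Distribution :: Web Console (2.2.0)
EOF

# Show bundles that are not Active; fragment bundles (e.g., the Web
# Console branding fragment) legitimately remain Resolved.
grep -v 'Active' /tmp/bundles.txt
```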
Note: For a complete list of installed features/bundles, see the DDF Included Features document.
DDF Directory Contents after Installation
During DDF installation, the major directories shown in the table below are created, modified, or replaced in the destination directory.
| Directory Name | Description |
|---|---|
| bin | Scripts to start and stop DDF |
| data | The working directory of the system – installed bundles and their data |
| data/log/ddf.log | Log file for DDF, logging all errors, warnings, and (optionally) debug statements. This log rolls up to 10 times, with frequency based on a configurable setting (default=1 MB) |
| deploy | Hot-deploy directory – KARs and bundles added to this directory will be hot-deployed (empty upon DDF installation) |
| docs | The DDF Catalog API Javadoc |
| etc | Directory monitored for addition/modification/deletion of third-party .cfg configuration files |
| etc/ddf | Directory monitored for addition/modification/deletion of DDF-related .cfg configuration files (e.g., the Schematron configuration file) |
| etc/templates | Template .cfg files for use in configuring DDF sources, settings, etc., by copying to the etc/ddf directory |
| lib | The system's bootstrap libraries. Includes the ddf-branding.jar file, which is used to brand the system console with the DDF logo |
| licenses | Licensing information related to the system |
| system | Local bundle repository. Contains all of the JARs required by DDF, including third-party JARs |
Configuring
DDF can be configured in several ways, depending on need.
Configuring via the Admin Console
Accessing the Admin Console
Open the admin portal:
- https://localhost:8993/admin
- Enter username and password.
Note: The default username/password is admin/admin. To change this, refer to the Password Management page.
Initial Configuration
The first time the DDF administrator portal runs, the initial configuration steps appear. Click Start to begin.
On the next screen, general configuration settings such as host address, port and site name can all be configured.
Next, choose between a standard installation and a full installation. Individual applications can be added, removed, or deactivated later.
The final step of initial configuration is a display of all applications installed and their current status. This tree structure demonstrates how several applications depend on other applications.
Note: The Platform App, Admin App, and Security Services App CANNOT be selected or unselected, as they are installed by default and removing them can cause errors. The Security Services App appears to be unselected upon first view of the tree structure, but it is in fact automatically installed during a later part of the installation process.
Viewing Currently Active Applications
Tile View
The first view presented is the Tile View, displaying all active applications as individual tiles.
List View
Optionally, active applications can be displayed in a list format by clicking the list view button.
Either view has an > arrow to view more information about the application as currently configured.
Configuration
The Configuration tab lists all bundles associated with the application as links to configure any configurable properties of that bundle.
Details
The Details tab gives a description, version, status, and a list of other applications that are either required by, or rely on, the current application.
Features
The Features tab breaks down the individual features of the application that can be installed or uninstalled as configurable features.
Managing Applications
The Manage button enables activation/deactivation and adding/removing applications.
Activating / Deactivating Applications
The Deactivate button stops individual applications and any dependent apps. Certain applications are central to overall functionality and cannot be deactivated. These will have the Deactivate button disabled. Disabled apps will be moved to a list at the bottom of the page, with an enable button to reactivate, if desired.
Warning: Deactivating the platform-app, admin-app, and security-services-app will cause errors within the system, so the capabilities to do so have been DISABLED.
Adding Applications
The Add Application button is at the end of the list of currently active applications.
Removing Applications
To remove an application, it must first be deactivated. This enables the Remove Application button.
Upgrading Applications
Each application tile includes an upgrade button to select a new version to install.
System Settings Tab
The installed configuration and features can also be viewed and edited from the System tab; however, it is recommended that configuration be managed from the Applications tab.
|
In general, applications should be managed via the applications tab. Configuration via this page could result in an unstable system. Proceed with caution! |
Configuring DDF
Configure DDF Global Settings
Global configuration settings are configured via the properties file system.properties. These properties can be manually set by editing this file or set via the installer.
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Protocol | org.codice.ddf.system.protocol | String | Default protocol that should be used to connect to this machine. | https:// | yes |
| Host | org.codice.ddf.system.hostname | String | The hostname or IP address used to advertise the system. Do not enter localhost. Possibilities include the address of a single node or that of a load balancer in a multi-node deployment. NOTE: Does not change the address the system runs on. | localhost | yes |
| Default Port | org.codice.ddf.system.port | String | The default port used to advertise the system. This should match either the http or https port. NOTE: Does not change the port the system runs on. | 8993 | yes |
| HTTP Port | org.codice.ddf.system.httpPort | String | The http port used by the system. NOTE: This DOES change the port the system runs on. | 8181 | yes |
| HTTPS Port | org.codice.ddf.system.httpsPort | String | The https port used by the system. NOTE: This DOES change the port the system runs on. | 8993 | yes |
| Root Context | org.codice.ddf.system.rootContext | String | The base or root context that services will be made available under. | /services | yes |
| Trust Store | trustStore | String | The trust store used for outgoing SSL connections. Path is relative to ddf.home. | etc/keystores/clientTruststore.jks | yes |
| Trust Store Password | trustStorePassword | String | The password associated with the trust store. | changeit (encrypted) | yes |
| Key Store | keyStore | String | The key store used for outgoing SSL connections. Path is relative to karaf.home. | etc/keystores/clientKeystore.jks | yes |
| Key Store Password | keyStorePassword | String | The password associated with the key store. | changeit (encrypted) | yes |
| Site Name | id | String | The site name for DDF. | ddf.distribution | yes |
| Version | version | String | The version of DDF that is running. This value should not be changed from the factory default. | 2.3.0 | yes |
| Organization | organization | String | The organization responsible for this installation of DDF. | Codice Foundation | yes |
| Site Contact | site contact | String | The email address of the site contact. | | no |
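As a sketch of how these global settings are typically changed outside the installer, the following shell commands edit the hostname property in a scratch copy of system.properties. The /tmp path and the example FQDN are illustrative, not part of a real DDF install:

```shell
# Work on a scratch copy so a typo cannot break a live install;
# point DDF_ETC at <DDF_INSTALL_DIR>/etc when doing this for real.
DDF_ETC=/tmp/ddf-props-demo
mkdir -p "$DDF_ETC"
printf 'org.codice.ddf.system.protocol=https://\norg.codice.ddf.system.hostname=localhost\n' \
  > "$DDF_ETC/system.properties"

# Replace the advertised hostname (remember: this does NOT change the
# address the system actually runs on). GNU sed syntax for -i.
sed -i -e 's|^org.codice.ddf.system.hostname=.*|org.codice.ddf.system.hostname=ddf.example.org|' \
  "$DDF_ETC/system.properties"

grep '^org.codice.ddf.system.hostname=' "$DDF_ETC/system.properties"
```

After editing the real file, restart DDF so the new values take effect.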
Manage Features
DDF includes many components, packaged as features, that can be installed and/or uninstalled without restarting the system. Features are collections of OSGi bundles, configuration data, and/or other features. For more information on the features that come with DDF, including a list of the ones included, consult the DDF Included Features page in the Software Version Description Document (SVDD).
|
Transitive Dependencies
Features may have dependencies on other features and will auto-install them as needed. |
Install Features Using the Admin Console
-
Open the admin console.
-
Enter Username and Password.
-
Select the appropriate application.
-
Select the Features tab.
-
Select the play arrow under the Actions column for the feature that should be installed.
-
Wait for the Status to change from Uninstalled to Installed.
Uninstall Features
-
Open the admin console.
-
Enter Username and Password.
-
Select the appropriate application.
-
Select the stop icon under the Actions column for the feature that should be uninstalled.
-
Wait for the Status to change from Installed to Uninstalled.
Add Feature Repositories
-
Open the web administration console.
-
Enter Username and Password
-
Select the Features tab.
-
Enter the URL of the feature repository (see below) to be added.
-
Select the Add URL button.
-
The new feature repository is added to the list of Feature Repositories above the URL field.
There are several ways a new feature repository can be discovered based on the URL entered:
-
The URL can contain the fully qualified path to the feature repository, e.g., mvn:https://tools.codice.org/artifacts/content/groups/public/ddf-standard/2.2.0/xml/features. This URL can include the username and password credentials if necessary. e.g., mvn:https://user/password@tools.codice.org/artifacts/content/groups/public/ddf-standard/2.2.0/xml/features. Note that the password will be in clear text.
-
The URL can contain only the mvn coordinates, e.g.,
mvn:ddf.features/ddf-standard/2.2.0/xml/features. In this case, the feature repositories configured for DDF will be searched.
Repositories to be searched by DDF can be configured in one of several ways:
-
Set the org.ops4j.pax.url.mvn.repositories property in the
<DDF_INSTALL_DIR>/etc/org.ops4j.pax.url.mvn.cfg file (most common). -
Set the org.ops4j.pax.url.mvn.settings property in the
<DDF_INSTALL_DIR>/etc/org.ops4j.pax.url.mvn.cfg file to the maven settings.xml file that specifies the repository(ies) to be searched. A settings.xml template file for this configuration is provided in <DDF_INSTALL_DIR>/etc/templates/settings.xml. -
Create a maven
settings.xml file in the <USER_HOME_DIR>/.m2 directory of the user running DDF that specifies the repository(ies) to be searched (this method is typical for a developer). -
The simplest approach is to specify the fully qualified URL to the feature repository.
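For example, a repository can be appended to the comma-separated org.ops4j.pax.url.mvn.repositories list with a one-line edit. This sketch works on a scratch copy of the cfg file; both repository URLs are illustrative:

```shell
# Scratch stand-in for <DDF_INSTALL_DIR>/etc/org.ops4j.pax.url.mvn.cfg
CFG=/tmp/org.ops4j.pax.url.mvn.cfg
printf 'org.ops4j.pax.url.mvn.repositories=https://repo1.maven.org/maven2\n' > "$CFG"

# Append another repository to the comma-separated list (GNU sed syntax).
sed -i -e 's|^\(org.ops4j.pax.url.mvn.repositories=.*\)$|\1,https://artifacts.example.org/releases|' "$CFG"

grep '^org.ops4j.pax.url.mvn.repositories=' "$CFG"
```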
Known Issues
Blank Web Console
DDF uses Pax Web as part of its HTTP support. Modifying the Pax Web runtime configuration in the web console may cause the web console to freeze.
Solution
Apply the configuration changes according to the hardening instructions instead of modifying the Pax Web runtime configuration in the web console.
Additional Information
For more information on the Web Console refer to http://felix.apache.org/site/apache-felix-web-console.html
Configuring DDF Using the System Console
Follow these steps to configure DDF using the system console.
|
System Console instructions are provided in the Console Commands section. |
Manage Features
DDF includes many components, packaged as features, that can be installed and/or uninstalled without restarting the system. Features are collections of OSGi bundles, configuration data, and/or other features. For more information on the features that come with DDF, including a list of the ones included, consult the DDF Included Features page in the Software Version Description Document (SVDD).
|
Transitive Dependencies
Features may have dependencies on other features and will auto-install them as needed. |
Install Features
-
Determine which feature to install by viewing the available features on DDF.
ddf@local>features:list -
The console outputs a list of all features available (installed and uninstalled). A snippet of the list output is shown below (the versions may differ based on the version of DDF being run):
State         Version  Name                 Repository  Description
[installed  ] [2.0.1 ] ddf-core             ddf-2.1.0
[uninstalled] [2.0.1 ] ddf-sts              ddf-2.1.0
[installed  ] [2.0.1 ] ddf-security-common  ddf-2.1.0
[installed  ] [2.0.1 ] ddf-resource-impl    ddf-2.1.0
[uninstalled] [2.0.1 ] ddf-source-dummy     ddf-2.1.0
-
Install the desired feature.
ddf@local>features:install ddf-source-dummy -
Check the feature list to verify the feature was installed.
ddf@local>features:list
State         Version  Name                 Repository  Description
[installed  ] [2.0.1 ] ddf-core             ddf-2.1.0
[uninstalled] [2.0.1 ] ddf-sts              ddf-2.1.0
[installed  ] [2.0.1 ] ddf-security-common  ddf-2.1.0
[installed  ] [2.0.1 ] ddf-resource-impl    ddf-2.1.0
[installed  ] [2.0.1 ] ddf-source-dummy     ddf-2.1.0
-
Check the bundle status to verify the service is started.
ddf@local>list
The console output should show an entry similar to the following:
[ 117] [Active ] [ ] [Started] [ 75] {branding} :: Catalog :: Source :: Dummy (<version>)
Uninstall Features
-
Check the feature list to verify the feature is installed properly.
ddf@local>features:list
State         Version  Name                 Repository  Description
[installed  ] [2.0.1 ] ddf-core             ddf-2.1.0
[uninstalled] [2.0.1 ] ddf-sts              ddf-2.1.0
[installed  ] [2.0.1 ] ddf-security-common  ddf-2.1.0
[installed  ] [2.0.1 ] ddf-resource-impl    ddf-2.1.0
[installed  ] [2.0.1 ] ddf-source-dummy     ddf-2.1.0
-
Uninstall the feature.
ddf@local>features:uninstall ddf-source-dummy
|
Dependencies that were auto-installed by the feature are not automatically uninstalled. |
-
Verify that the feature has uninstalled properly.
ddf@local>features:list
State         Version  Name                 Repository  Description
[installed  ] [2.0.1 ] ddf-core             ddf-2.1.0
[uninstalled] [2.0.1 ] ddf-sts              ddf-2.1.0
[installed  ] [2.0.1 ] ddf-security-common  ddf-2.1.0
[installed  ] [2.0.1 ] ddf-resource-impl    ddf-2.1.0
[uninstalled] [2.0.1 ] ddf-source-dummy     ddf-2.1.0
Configuring DDF using Configuration (.cfg) files
DDF can also be configured with configuration (.cfg) files.
HTTP Port Configuration
|
Do not use the Web Administration Console to change the HTTP port. While the Web Administration Console’s Pax Web Runtime offers this configuration option, it has proven to be unreliable and may crash the system. |
Multiple Local DDF Nodes
Edit the port numbers in the files in the DDF install folder. The line numbers relate to 2.1.X releases.
| File to Edit | Line Number | Original Value | Example of New Value |
|---|---|---|---|
| bin/karaf.bat | 99 | 5005 | e.g., 5006 |
| etc/org.apache.karaf.management.cfg | 27 | 1099 | e.g., 1199 |
| " " | 32 | 44444 | e.g., 44445 |
| etc/org.ops4j.pax.web.cfg | 9 | 8181 | e.g., 8281 |
| " " | 22 | 8993 | e.g., 8994 |
|
Be sure to note the port number that replaced 8181, then enter that number in the Web Console under the Configuration tab for the Platform Global Configuration → DDF Port entry. Also edit the site name so that there are no duplicates on your local machine. |
|
Only root can access ports < 1024 on Unix systems. For suggested ways to run DDF with ports < 1024, see How do I use port 80 as a non-root user?. |
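Because line numbers shift between releases, a more robust approach than editing by line number is to match on the Pax Web property names in etc/org.ops4j.pax.web.cfg. The sketch below edits a scratch copy; the property names are the standard OSGi HTTP Service names used by Pax Web, and the new port values are illustrative:

```shell
# Scratch stand-in for <DDF_INSTALL_DIR>/etc/org.ops4j.pax.web.cfg
CFG=/tmp/node2-org.ops4j.pax.web.cfg
printf 'org.osgi.service.http.port=8181\norg.osgi.service.http.port.secure=8993\n' > "$CFG"

# Bump both ports for a second local node (GNU sed syntax).
sed -i \
  -e 's|^org.osgi.service.http.port=8181$|org.osgi.service.http.port=8281|' \
  -e 's|^org.osgi.service.http.port.secure=8993$|org.osgi.service.http.port.secure=8994|' \
  "$CFG"

cat "$CFG"
```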
Enable and Configure HTTP to HTTPS Proxy
What it does: This feature proxies http to https.
When to use: Use this feature when you have legacy clients that can’t use HTTPS.
How to install this feature using the DDF terminal:
Note: If DDF has not been installed, use the “How to install this feature using the AdminUI” guide found below
1.) Launch the DDF application.
2.) In the DDF console, type the command “features:install platform-http-proxy”
How to install this feature using the AdminUI:
1.) Navigate your browser to https://localhost:8993/admin/index.html. The admin console appears.
1.a) If this is not a new install, skip to step 9
2.) At the admin console, click on “Start” to begin the setup process. The general settings page will appear.
3.) Configure the general settings by entering in values for the “Protocol”, “Host”, “Port”, “Site Name”, and “Organization.” Descriptions for these settings can be found on the “Configure general settings” page.
4.) Click “Next” and the “Setup Types” page appears.
5.) Choose whichever setup type that corresponds to the suite of applications you would like installed.
6.) [Optional] Click “Customize” and a window appears allowing you to customize which features will be included in your installation.
7.) Click “Next” and the installation begins.
8.) Once the installation is finished, click “Finish” to conclude the setup and the page will refresh.
9.) On the left hand side of the screen, select the “Applications” tab.
10.) Under “Active Applications” choose the “>” arrow under “DDF Platform”. This will bring up the configuration page for the DDF Platform application. This arrow is shown here circled in red:
11.) Under the “DDF Platform” heading, choose the “Features” tab. The features tab shows us all the features we can install to the currently selected app.
12.) Scroll through the list of features and find “platform-http-proxy.” The feature will be listed as “Uninstalled.” Click on the arrow button to the right of the word “Uninstalled” to install the platform-http-proxy. This arrow button is shown here circled in red:
Configuring the proxy:
Note: The hostname should be set by default. Only configure the proxy if this is not working.
1.) Navigate your browser to https://localhost:8993/admin/index.html. The admin console appears.
2.) On the left hand side of the screen, select the “Applications” tab.
3.) Under “Active Applications” choose the “>” arrow under “DDF Platform”. This will bring up the configuration page for the DDF Platform application. This arrow is shown here circled in red:
4.) Under the “DDF Platform” heading, ensure that the “Configuration” tab is selected.
5.) Under the “Name” heading, select “HTTP to HTTPS Proxy Settings” and the settings menu appears.
6.) Under “Hostname”, enter in the Hostname to use for HTTPS connection in the proxy.
7.) Click “Save changes” to save the Hostname.
Enable SSL for Clients
In order for outbound secure connections (HTTPS) to be made from components like Federated Sources and Resource Readers, configuration may need to be updated with keystores and security properties. These values are configured in the <DDF_INSTALL_DIR>/etc/system.properties file. The following values can be set:
| Property | Sample Value | Description |
|---|---|---|
| javax.net.ssl.trustStore | etc/keystores/serverTruststore.jks | The java keystore that contains the trusted public certificates for Certificate Authorities (CAs) that can be used to validate outbound TLS/SSL connections (e.g., HTTPS). When making outbound secure connections, a handshake will be done with the remote secure server, and the CA that is in the signing chain for the remote server’s certificate must be present in the trust store for the secure connection to be successful. |
| javax.net.ssl.trustStorePassword | changeit | The password for the truststore listed in the above property. |
| javax.net.ssl.keyStore | etc/keystores/serverKeystore.jks | The keystore that contains the private key for the local server, which can be used for signing and encryption. This must be set if establishing outgoing 2-way (mutual) SSL connections, where the local server must also present its certificate for the remote server to verify. |
| javax.net.ssl.keyStorePassword | changeit | The password for the keystore listed above. |
| javax.net.ssl.keyStoreType | jks | The type of keystore. |
| https.cipherSuites | TLS_DHE_RSA_WITH_AES_128_CBC_SHA, | The cipher suites that are supported when making outbound HTTPS connections. |
| https.protocols | TLSv1.1,TLSv1.2 | The protocols that are supported when making outbound HTTPS connections. |
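As a sketch, the properties above map onto system.properties entries like the following. This writes them to a scratch file; the values are the sample values from the table, and the /tmp path is illustrative:

```shell
# Scratch stand-in for <DDF_INSTALL_DIR>/etc/system.properties
ETC=/tmp/ddf-ssl-demo
mkdir -p "$ETC"
cat > "$ETC/system.properties" <<'EOF'
javax.net.ssl.trustStore=etc/keystores/serverTruststore.jks
javax.net.ssl.trustStorePassword=changeit
javax.net.ssl.keyStore=etc/keystores/serverKeystore.jks
javax.net.ssl.keyStorePassword=changeit
javax.net.ssl.keyStoreType=jks
https.protocols=TLSv1.1,TLSv1.2
EOF

# Count the javax.net.ssl entries we just wrote.
grep -c '^javax.net.ssl' "$ETC/system.properties"   # → 5
```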
Configuring DDF with New Certificates
DDF ships with a default security certificate configured to identify the DDF instance machine as "localhost." This allows the DDF distribution to be unzipped and run immediately in a secure manner. If the installer was used to install the DDF and a hostname other than 'localhost' was given, a new cert for the hostname was generated and added to the keystore. If the hostname was left as 'localhost' or the hostname was changed after installation, in order to access the DDF instance from another machine over HTTPS (now the default for many services) the default certificates need to be replaced with a certificate that uses the fully qualified hostname of the server running the DDF instance.
Important Terms
| Term | Definition | Example |
|---|---|---|
| DDF_HOME | The path to the unzipped DDF distribution. | /opt/ddf/ddf-2.6.0 |
| alias | The nickname given to a certificate within a keystore to make it easily identifiable. Normally the alias should be the DDF instance’s FQDN. | localhost |
| certificate | A combination of an entity’s identity information with the entity’s public key. The entity can be a person, organization, or something else, but in this case the entity is a computer on the network. To be valid, a certificate must be digitally (cryptographically) signed by a certificate authority. By signing a certificate, the CA attests that the public key truly belongs to the entity and no one else. See also PKIX. | <FQDN>.crt |
| CN | Common Name. The FQDN of the DDF instance as defined within the certificate. | search.codice.org |
| certification path | A list of certificates, starting with the server’s certificate and followed by the certificate of the CA that signed the server’s CSR. The list of certificates continues, with each subsequent certificate belonging to the CA that signed the current CA’s certificate. This chain continues until it reaches a trusted anchor, or root CA certificate. The chain establishes a link between the trust anchor and the server’s certificate. See IETF RFC 4158 for details. | |
| chain of trust | See certification path. | |
| CSR | Certificate Signing Request. A certificate that has not yet been signed by a certificate authority. | <FQDN>.csr |
| digital certificate | See certificate. | |
| FQDN | Fully Qualified Domain Name. | search.codice.org |
| HTTPS | Hyper-Text Transfer Protocol Secure. An encrypted alternative to HTTP; the HTTP connection is encrypted over TLS. See IETF RFC 2818 for more information. | https:// |
| JKS | Java keystore. A dictionary of cryptographic objects (e.g., private keys, certificates) referenced by an alias. The JKS format is specific to Java. | |
| keystore | Refers to either a JKS keystore or a PKCS#12 keystore. For the purposes of these instructions, a keystore is always a file. | |
| keytool | The Java keytool is a key and certificate management command line utility. | |
| openssl | The openssl program is a command line tool for using the various cryptography functions of OpenSSL’s crypto library from the shell. | |
| PKCS#12 | Personal Information Exchange Syntax. A standard that allows certificates, private keys, and optional attributes to be combined into a single file. See IETF RFC 7292 for more information. | <FQDN>.p12 |
| PKIX | A public key infrastructure, also known as X.509. It is documented in IETF RFC 5280 and defines what a certificate is. | |
| PORT | TCP port of a service. | 8993 |
| security certificate | See certificate. | |
| TLS | Transport Layer Security protocol. Provides privacy and data integrity between client and server. See IETF RFC 5246 for more information. | |
Update DDF Configuration
Configure DDF Web Service Providers
By default, Solr, the STS server, the STS client, and the rest of the services use the system property 'org.codice.ddf.system.hostname', which defaults to 'localhost' rather than the fully qualified domain name of the DDF instance. Assuming the DDF instance is providing these services, the configuration must be updated to use the DDF instance’s fully qualified domain name as the service provider.
This can be done by editing the system.properties in <INSTALL_DIRECTORY>/etc/
|
The process of changing the configuration can cause users to lose access to the DDF instance via the Web. This includes losing access to the Admin UI (https://localhost:8993/admin) and the Felix console (https://localhost:8993/system/console). Without access to these bundles, it is not possible to configure the DDF instance through a Web browser. |
|
Even if Web access is lost, the DDF instance can still be configured using the DDF command line console. |
-
Start the DDF instance, if it is not already running.
-
Go to https://localhost:8993/admin and step through the installer setting the hostname to the instance’s FQDN and the port to correct value.
Configure Files in HOME Directory Hierarchy
|
The passwords configured in this section reflect the passwords used to decrypt the Java keystores, i.e., the passwords tied to the JKS files. Changing these values without also changing the passwords of the JKS files causes undesirable behavior. |
-
In
<DDF_HOME>/etc/user.properties, modify the line:
localhost=localhost,group,admin,manager,viewer,webconsole
To be:
<FQDN>=<PASSWORD>,group,admin,manager,viewer,webconsole
-
Next ,configure
<DDF_HOME>/etc/system.properties
#START DDF SETTINGS
# Set the keystore and truststore Java properties
javax.net.ssl.keyStore=etc/keystores/serverKeystore.jks
javax.net.ssl.keyStorePassword=<NewPassword>
javax.net.ssl.trustStore=etc/keystores/serverTruststore.jks
javax.net.ssl.trustStorePassword=<NewPassword>
javax.net.ssl.keyStoreType=jks
Create Private Key and Certificate Signing Request
|
These steps assume that |
Open a console or terminal and change directory
$> cd <DDF_HOME>/etc/certs
Create a new RSA key pair (public and private keys) and use the public key to create a certificate signing request. This is done with a single openssl command:
$> openssl req -newkey rsa:2048 -keyout <FQDN>.key -out <FQDN>.csr
The command will encrypt the private key to protect it. The command prompts for a pass phrase that will decrypt the private key:
writing new private key to '....key'
Enter PEM pass phrase:
Enter a pass phrase (this documentation uses the pass phrase changeit).
The command then prompts for identity information:
-
For country, use the same two-letter ISO code as the certificate authority. For the Demo CA, use "US".
-
For the "Common Name" or "CN" enter the Fully Qualified Domain Name (FQDN) of the system. An example of a FQDN would be "search.codice.org".
This command creates two files:
-
<FQDN>.csr, the certificate signing request. The CSR includes the public key that matches the private key. -
<FQDN>.key, a private key cryptographically linked to the CSR.
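The interactive prompts above can be avoided in scripted environments by passing the pass phrase and subject on the command line. This is a sketch; the FQDN, country code, and pass phrase are illustrative values:

```shell
# Generate key + CSR non-interactively (values are illustrative).
cd "$(mktemp -d)"
FQDN=search.example.org
openssl req -newkey rsa:2048 \
  -keyout "$FQDN.key" -out "$FQDN.csr" \
  -passout pass:changeit \
  -subj "/C=US/CN=$FQDN"

# The CSR should echo back the CN we supplied.
openssl req -in "$FQDN.csr" -noout -subject
```

Supplying the pass phrase on the command line exposes it to other local users via the process list, so prefer the interactive prompt on shared machines.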
Sign the CSR to Create a Valid Certificate
|
This step assumes the Demo CA signs the CSR. If the CSR is signed by a different CA, skip this step. |
$> openssl ca -config openssl-demo.cnf -policy policy_anything -passin "pass:secret" -in <FQDN>.csr -out <FQDN>.crt
The command prompts:
>Certificate is to be certified until Jul 23 21:06:37 2016 GMT (365 days)
>Sign the certificate? [y/n]:
Enter y and select return. It responds:
>1 out of 1 certificate requests certified, commit? [y/n]
Enter y and select return again.
>Write out database with 1 new entries
>Data Base Updated
A new .crt file is written to the current directory. By default, the certificate is valid for one year. This period can be changed using openssl command line options or by editing the openssl configuration file, openssl-demo.cnf.
Create PKCS#12 File
The private key and certificate need to be combined into a new structure that can be imported into the keystore. A chain of trust file also needs to be created and combined with the other objects. A file that conforms to the PKCS#12 specification is created from these objects as an intermediary step in the process.
Creating the Chain of Trust
-
If the certificate was signed by the Demo CA, skip this step. It is not necessary because
openssl adds the issuer’s (signer’s) certificate to the PKCS#12 file to create the complete chain of trust. -
Establishing a chain of trust is important if a root CA did not directly sign the CSR. It is important for establishing a connection between different branches/organizations/departments.
| Although a single certificate can have only one issuer (it cannot have more than one CA signature), different certificate chains can exist for the same target certificate, because more than one certificate can exist containing the same subject and public key. |
-
Concatenate the certificate from the issuer, all intermediate certificate authorities, and the root authority into a text file.
$> cat <root CA PEM> <intermediate CA 1 PEM> <intermediate CA 2 PEM> > chain.txt
This command creates the file chain.txt, an intermediate file used to create the PKCS#12 file.
|
If a certificate chain file was created, add the following options to the next command: -chain -CAfile chain.txt |
Create the PKCS#12 file. The argument for -passin is the password that was used to encrypt the private key. The argument for -passout sets the password used to decrypt the PKCS#12 file.
$> openssl pkcs12 -in <FQDN>.crt -inkey <FQDN>.key -certfile demoCA/cacert.pem -out <FQDN>.p12 -export -name <FQDN> -passin "pass:changeit" -passout "pass:changeit"
|
Use openssl verify -CAfile ./demoCA/cacert.pem <FQDN>.crt |
Install Signed Certificate Into Key Store
Use the Java keytool to import the signed certificate into the server keystore file. The keytool needs the password to decrypt the PKCS#12 file (-srcstorepass) and the password to decrypt the existing server keystore (-deststorepass).
|
This example assumes the passwords are |
$> keytool -importkeystore -srckeystore <FQDN>.p12 -srcstoretype pkcs12 -destkeystore <DDF_HOME>/etc/keystores/serverKeystore.jks -srcalias <FQDN> -deststorepass changeit -srcstorepass changeit
|
Check the contents of a Java keystore with the keytool: keytool -list -keystore <{branding}_HOME>/etc/keystore/serverKeystore.jks
For more detail, add -v: keytool -list -v -keystore <{branding}_HOME>/etc/keystore/serverKeystore.jks
|
Remove localhost Key Entry
The server key store comes configured with a key for localhost. It should be removed when a certificate with the server’s FQDN is installed. If the localhost key exists, the Solr server uses that key to sign messages. The browser and other client processes will halt when trying to connect to Solr, because the client expects the common name to be the Solr process’s <FQDN>. Use the keytool to remove the localhost entry:
$> keytool -delete -keystore <{branding}_HOME>/etc/keystore/serverKeystore.jks -alias localhost
Import CA Certificates into the Keystore
-
The Demo CA certificate is already imported into the keystore file that is included in the DDF distribution. If the FQDN certificate was signed by the Demo CA, skip this step.
Import CA Certificates into the Truststore
-
The Demo CA certificate is already imported into the truststore file that is included in the DDF distribution. If the FQDN certificate was signed by the Demo CA, skip this step.
-
The truststore is created from the CA certificates. The CA should be the only entry needed in the trust store.
# import each CA into truststore $> keytool -import -trustcacerts -alias <CA alias> -file <CA PEM> -keystore serverTruststore.jks
|
When federating with other DDF instances that do not share the same CA, you will need to import the CA certs from the other federated instances and they will need to import yours. |
Restart and Test
Finally, restart the DDF instance. Browse the Admin UI at https://<FQDN>:8993/admin to test changes.
|
If the server’s fully qualified domain name is not recognized, the name may need to be added to the network’s DNS server. |
|
The DDF instance can be tested even if there is no entry for the FQDN in the DNS. First, test whether the FQDN is already recognized by executing this command: ping <FQDN>. If the command responds with an error message such as unknown host, then modify the system’s hosts file to point the server’s FQDN to the loopback address. For example: 127.0.0.1 <FQDN> |
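The hosts-file workaround in the tip above can be sketched as follows. It edits a scratch copy rather than the real /etc/hosts (which requires root), and the FQDN is illustrative:

```shell
# Scratch stand-in for /etc/hosts (editing the real file needs root).
HOSTS=/tmp/demo-hosts
printf '127.0.0.1 localhost\n' > "$HOSTS"

# Point the server's FQDN at the loopback address for local testing,
# skipping the append if an entry already exists.
FQDN=search.example.org
grep -q " $FQDN\$" "$HOSTS" || echo "127.0.0.1 $FQDN" >> "$HOSTS"

cat "$HOSTS"
```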
Configuring a Java Keystore for Secure Communications
|
The following information was sourced from https://www.racf.bnl.gov/terapaths/software/the-terapaths-api/example-java-client/java-client/setting-up-keystores-with-jetty-and-keytool. |
Create a Client Keystore
The following steps define the procedure for using a PKCS12 certificate. This is the most popular format that is used when exporting from a web browser.
-
Obtain a personal ECA cert (client certificate).
-
Open Internet Explorer → Tools → Options.
-
Select the Content tab.
-
Select Certificates.
-
Select the Personal tab.
-
Select the certificate to be exported. Choose the certificate that has no "Friendly Name" and is not the "Encryption Cert".
-
Select the Export button.
-
Follow the steps in the Certificate Export Wizard.
-
When a prompt requests to export the private key, select the Yes button.
-
-
Download a jetty 6.1.5 distribution from http://dist.codehaus.org/jetty/jetty-6.1.5/jetty-6.1.5.zip.
-
Unpack the jetty distribution and place the client certificate (the one just exported) in the lib directory.
-
Navigate to the lib directory of the jetty distribution in a command console.
-
Add a cert to a new Java keystore, replacing cert with the name of the PKCS12 keystore to be converted.
-
Replace
clientKeystorewith the desired name of the Java keystore:
java -cp jetty-6.1.5.jar org.mortbay.jetty.security.PKCS12Import cert.p12 clientKeystore.jks -
Enter the two passwords when prompted.
-
Input keystore passphrase is the passphrase that is used to protect cert.p12.
-
Output keystore passphrase is the passphrase that is set for the new Java keystore clientKeystore.jks.
-
-
It is recommended to set the private key password to the same as the keystore password due to limitations in Java.
-
Run the following command to determine the alias name of the added current entry. It is listed after Alias Name:
keytool -list -v -keystore clientKeystore.jks -
Clone the existing key using the java keytool executable, filling in
<CurrentAlias>,<NewAlias>,clientKeystore.jks, andpasswordwith the correct names.
keytool -keyclone -alias "<CurrentAlias>" -dest "<NewAlias>" -keystore clientKeystore.jks -storepass password -
When prompted for a password, use the same password used when the keystore was created.
-
Delete the original alias.
keytool -delete -alias "<CurrentAlias>" -keystore clientKeystore.jks -storepass password
|
After the keystore is successfully created, delete the jetty files used to perform the import. |
Create a Truststore
keytool -import -trustcacerts -alias "Trusted Cert" -file trustcert.cer -keystore truststore.jks
Enter a keystore password when prompted.
Add a Certificate to an Existing Keystore
-
Import the certificate into a Java keystore as a certificate.
keytool -importcert -file newcert.cer -keystore clientKeystore.jks -alias "New Alias" -
Enter the keystore password, if prompted.
Configuring WSS Security
-
Add system console to whitelisted contexts.
-
-
Select DDF Security.
-
Select Configuration tab.
-
Select the Web Context Policy Manager.
-
Add
/system/consoleto the Whitelisted Contexts
-
-
|
By default, the Catalog Backup Post-Ingest Plugin is NOT enabled. To enable it, check the Enable Backup Plugin configuration item in the Backup Post-Ingest Plugin configuration (Enable Backup Plugin: true). The following steps assume a hostname of 'ddf'. |
-
Configure Catalog External Solr Catalog Provider
-
Change the HTTP URL to:
https://ddf:8993/solr
-
-
Configure Persistent Store
-
Solr URL:
https://ddf:8993/solr
-
-
Configure Catalog Federation Strategy
-
Change Solr URL to:
https://ddf:8993/solr
-
-
Configure Security STS Client
-
Change the STS WSDL Address to:
https://ddf:8993/services/SecurityTokenService?wsdl
-
-
Configure Security STS Server
-
SAML Assertion Lifetime:
86400 -
Change Token Issuer to:
ddf -
Change Signature Username to:
ddf -
Change Encryption Username to:
ddf
-
-
Configure Security STS LDAP Login
-
features:install security-sts-ldaplogin -
LDAP URL:
ldaps://ddf:1636 -
SSL Keystore Alias:
ddf -
Configure Platform Global Configuration
-
Protocol:
https -
Host:
ddf -
Port:
8993
-
-
Configure the Web Context Policy Manager
-
In White Listed Contexts, add /sso
-
-
Configuring the Embedded LDAP
|
The Embedded LDAP has hard-coded values for the keystore path, truststore path, keystore password, and truststore password (
A workaround is to modify |
-
The default password in
config.ldif for serverKeystore.jks is changeit. This needs to be modified to password.-
ds-cfg-key-store-file: ../../keystores/serverKeystore.jks -
ds-cfg-key-store-type: JKS -
ds-cfg-key-store-pin: password -
cn: JKS
-
-
The default password in
config.ldif for serverTruststore.jks is changeit. This needs to be modified to password.-
ds-cfg-trust-store-file: ../../keystores/serverTruststore.jks -
ds-cfg-trust-store-pin: password -
cn: JKS
-
-
If using the default keystores and certificates, start the
opendj-embedded app with the command app:start opendj-embedded -
Shutdown DDF
-
<ddf-home>/bin/shutdown -f
-
-
Add the newly created keystore and truststore
-
put the newly created
serverKeystore.jksin<ddf-home>/etc/keystores -
put the newly created
serverTruststore.jksin<ddf-home>/etc/keystores
-
-
- Configure system properties
  - In <ddf-home>/etc/system.properties, modify the keystore and truststore paths and passwords as appropriate:

#START DDF SETTINGS
# Set the keystore and truststore Java properties
javax.net.ssl.keyStore=etc/keystores/serverKeystore.jks
javax.net.ssl.keyStorePassword=password
javax.net.ssl.trustStore=etc/keystores/serverTruststore.jks
javax.net.ssl.trustStorePassword=password
javax.net.ssl.keyStoreType=jks
# Set the global url properties
org.codice.ddf.system.protocol=https://
org.codice.ddf.system.hostname=ddf
org.codice.ddf.system.httpsPort=8993
org.codice.ddf.system.httpPort=8181
org.codice.ddf.system.port=8993
org.codice.ddf.system.rootContext=/services
# HTTPS specific settings. If making a secure connection not leveraging the HTTPS Java libraries and
# classes (e.g., if you are using secure sockets directly) then you will have to set this directly
https.cipherSuites=TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA
https.protocols=TLSv1,TLSv1.1,TLSv1.2
- Configure users properties
  - In <ddf-home>/etc/users.properties, change localhost=localhost,admin to ddf=ddf,admin
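The users.properties change above can be scripted. A minimal sketch, run here against a sample file rather than the real <ddf-home>/etc/users.properties:

```shell
# Demo on a sample file; point USERS_PROPS at <ddf-home>/etc/users.properties in practice.
USERS_PROPS=users.properties.sample
printf 'localhost=localhost,admin\n' > "$USERS_PROPS"    # stand-in for the default entry
# Replace the default 'localhost' entry with the 'ddf' hostname entry
sed -i 's/^localhost=localhost,admin$/ddf=ddf,admin/' "$USERS_PROPS"
cat "$USERS_PROPS"    # prints: ddf=ddf,admin
```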
- Start DDF
  - For Windows, run <ddf-home>/bin/ddf.bat
  - For *nix, run <ddf-home>/bin/ddf
  - Admin Console: https://ddf:8993/admin
  - Search UI: https://ddf:8993/search
- Remove system/console from the whitelist
  - Select DDF Security.
  - Select the Configuration tab.
  - Select the Web Context Policy Manager.
  - Remove /system/console from the Whitelisted Contexts.
Configuring Solr Catalog Provider
Configure the Solr Catalog Provider Data Directory
The Solr Catalog Provider writes index files to the file system. By default, these files are stored under $DDF_HOME/data/solr/catalog/data. If there is inadequate space in $DDF_HOME, or if it is desired to maintain backups of the indexes only, this directory can be changed.
In order to change the Data Directory, the system.properties file in $DDF_HOME/etc must be edited prior to starting DDF.
# Uncomment the following line and set it to the desired path
#solr.catalog.data.dir=/opt/solr/catalog/data
Changing the Data Directory after DDF has ingested data
- Shut down DDF.
- Create the new directory to hold the indexes: mkdir /path/to/new/data/dir
- Copy the indexes to the new directory: cp /path/to/old/data/dir/* /path/to/new/data/dir/
- Set the system.properties file to use the new directory: solr.catalog.data.dir=/path/to/new/data/dir
- Restart DDF.
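The steps above can be sketched as a script; the scratch directories created here are placeholders standing in for the real old and new index locations:

```shell
# Scratch directories stand in for the real Solr index locations
OLD_DIR=$(mktemp -d)
NEW_DIR=$(mktemp -d)
touch "$OLD_DIR/segments_1"        # stand-in for a Solr index file
# Copy the indexes to the new directory
cp -r "$OLD_DIR"/. "$NEW_DIR"/
# Line to set in <ddf-home>/etc/system.properties before restarting DDF:
echo "solr.catalog.data.dir=$NEW_DIR"
```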
Configuring DDF Application and Configuration Clustering
An essential part of the DDF clustering solution is the ability to manage applications and configurations among cluster nodes. The following documentation will help in setting up a set of DDF server nodes in a cluster and allow an administrator to manage the applications and configurations that are deployed.
Set Up DDF Cluster
Initial Setup
The goal of the DDF cluster solution is to keep applications and configurations synchronized on every DDF server. When setting up a DDF server, it is recommended that only one server be set up per machine (virtual or non-virtual). This means that for each physical machine or virtual machine (VM), only one instance of DDF is deployed on it. This configuration provides the least complexity in configuring the DDF cluster and allows the DDF server itself to utilize the resources of the entire system. It is also recommended that all DDF servers and systems be configured the same: all DDF server platforms should have the same configurations and applications running initially, and all physical and virtual systems running the DDF servers should have the same configuration (e.g., operating system, memory, CPU, etc.).
Start and Configure New DDF Server Node Clusters
Before starting your DDF cluster nodes, you must first set up your node network configurations. See the Configure DDF Clustering for Unicast and Multicast section for more information. To view all of the available nodes, navigate in your browser to the DDF Admin Console (https://localhost:8993/admin) and click on the tab labeled “Clustered Groups”. Under the group “default”, you should see all running DDF nodes that have been clustered. It is also possible to move all instances into a named group. In order to perform this action, you must have access to one of the DDF command line shells directly or utilize Gogo located at https://localhost:8993/system/console/gogo. First, install the Cellar feature:
features:install cellar
The first action that will be performed is the creation of a new group. To create a new cluster group named “mygroup”, execute the following command:
cluster:group-create mygroup
This command will create a new cluster group which will not contain any nodes. Execute the following command to view all groups:
cluster:group-list
You should see the following output:
Group Members
* [default ] [192.168.1.110:5701* ]
[mygroup ] []
As you can see, there are now two groups, the default group and the new group that you have just created. We now need to move the DDF node out of the “default” group and into “mygroup”. Execute the following commands:
cluster:group-join mygroup 192.168.1.110:5701
cluster:group-quit default 192.168.1.110:5701
The first command allows the DDF node to join the new group. The second command removes the DDF node from the “default” group. Running cluster:group-list again shows:
Group Members
[default ] []
* [mygroup ] [192.168.1.110:5701* ]
These commands should be executed on all “production” nodes located in the default group. Note: The default group will always remain and cannot be deleted. It is possible for DDF nodes to be separated into multiple groups. It is also possible for one DDF node to exist in multiple groups. These are advanced topics and can be addressed in additional documentation links.
Add a DDF Server Node to the Cluster
There may be a need for an additional DDF server node after an existing DDF cluster has already been configured and deployed. It is important that when adding additional nodes, the new node matches the existing nodes in terms of applications and configurations. Therefore, it is good practice to copy one of the existing nodes and push that copy to a new virtual machine or server machine instance. This provides a more stable transition of the new node into the cluster. Once you have set up the new node, you can follow the instructions above to add it to the cluster group, if needed.
Manage Applications
Application management within the cluster has been designed to function as if you were only managing one instance of DDF. All DDF applications are handled at the feature level. Therefore, the Features section of the Admin Console can be used to manage all applications within the DDF Cluster. The Features section of the Web Console can be accessed by navigating to https://localhost:8993/system/console/features/features in a web browser. Credentials may be required (username and password) to access the console.
From the Features console, the following functions are available:
-
Provides a listing of the available features and repositories in the cluster
-
Add new repositories to the cluster
-
Remove existing repositories from the cluster
-
Install features from repositories into the cluster
-
Uninstall or remove features from the cluster
For more information on using the console, refer to the DDF User documentation.
Manage Configurations
Just as with Application Management, managing configurations within a cluster was designed to function as if you were configuring one instance of DDF. There are two areas where DDF configurations can be modified. Most modifications will occur within the Configurations section of the Web Console. The Web Console can be accessed by navigating to https://localhost:8993/system/console in a web browser. Credentials may be required (username and password) to access the console.
From the Configuration console, the following features are available:
-
Provides a listing of the available configurations
-
Edit configuration values in the cluster
-
Unbind the configuration from the bundle
-
Remove the configuration from the cluster
For more information on using the console, refer to the DDF User’s Guide.
Control Feature and Configuration Synchronization
There may be instances where certain configurations are to remain local to a certain DDF node. This behavior can be controlled through the Cellar groups configuration. To open this configuration, navigate to the Configuration section of the Web Console. Within the configuration list, search for the configuration named “org.apache.karaf.cellar.groups”. Click on the configuration to view or edit it. Within the file you will see many configurations listed with the following format:
[cluster group name] . [configuration type (e.g. feature, configuration, etc.)]. [list type] = [values]
These configurations allow you to control the synchronization of features and configurations through blacklists and whitelists. If you do not want a specific feature or configuration to propagate throughout your cluster group, you can put it into a blacklist. By default for all cluster groups, all features are in the white list:
mygroup.features.whitelist.outbound = *
with the exception of “cellar”:
mygroup.features.blacklist.outbound = cellar
As for configurations, by default all configurations are whitelisted with the exception of the following:
mygroup.config.blacklist.outbound = org.apache.felix.fileinstall*, org.apache.karaf.cellar.groups, org.apache.karaf.cellar.node, org.apache.karaf.management, org.apache.karaf.shell, org.ops4j.pax.logging
Once you have made your changes, you can save the configuration by pressing the “Save” button.
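Putting the defaults together, a groups configuration that also keeps a local-only configuration from propagating might look like the fragment below (my.local.only.pid is an illustrative placeholder, not a real DDF PID):

```
mygroup.features.whitelist.outbound = *
mygroup.features.blacklist.outbound = cellar
mygroup.config.blacklist.outbound = org.apache.felix.fileinstall*, org.apache.karaf.cellar.groups, org.apache.karaf.cellar.node, org.apache.karaf.management, org.apache.karaf.shell, org.ops4j.pax.logging, my.local.only.pid
```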
Additional Details
Configure DDF Clustering for Unicast and Multicast
By default, DDF clustering utilizes TCP-IP unicast for discovering other DDF nodes. The hazelcast.xml file located under <DDF root>/etc/ contains the port and address configurations for network setup. The TCP-IP unicast mode has been set up to allow for manual configuration and control of initial clustering. This configuration is also beneficial for cases where a particular network cannot support multicast or multicast has been turned off. There is also a configuration which allows auto-discovery of DDF nodes and utilizes multicast as a transport. The hazelcast.xml file is configured as follows to allow for TCP-IP unicast discovery of cluster nodes:
<join>
<multicast enabled="false">
<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="true">
<interface>127.0.0.1</interface>
</tcp-ip>
<aws enabled="false">
<access-key>my-access-key</access-key>
<secret-key>my-secret-key</secret-key>
<region>us-east-1</region>
</aws>
</join>
As you can see, the multicast option has been set to false and the tcp-ip option is set to true. All systems that will participate in the cluster need to have their IP addresses listed within the <interface> elements of the tcp-ip section. These modifications must be made for each node. Once these modifications have been made to the hazelcast.xml file, it is recommended that the nodes be restarted. The following hazelcast.xml configuration would be used for multicast auto-discovery:
<join>
<multicast enabled="true">
<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="false">
<interface>127.0.0.1</interface>
</tcp-ip>
<aws enabled="false">
<access-key>my-access-key</access-key>
<secret-key>my-secret-key</secret-key>
<region>us-east-1</region>
</aws>
</join>
As you can see, the multicast option has been set to true and the tcp-ip option is set to false. A multicast group and port can be specified in the multicast section shown above. These modifications must be made for each node. Once these modifications have been made to the hazelcast.xml file, it is recommended that the nodes be restarted.
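For the unicast case described earlier, each node lists the addresses of every cluster member inside the tcp-ip section. A sketch with two placeholder addresses (substitute the actual IPs of your nodes):

```xml
<tcp-ip enabled="true">
    <!-- one <interface> entry per cluster member; these IPs are placeholders -->
    <interface>192.168.1.110</interface>
    <interface>192.168.1.111</interface>
</tcp-ip>
```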
Verify Synchronized DDF Nodes
In most cases, the DDF system console should provide you with a listing of all features, repositories, and configurations that are installed on the cluster. There are times when the cluster can become out of sync; this may occur if a system has been offline for some time. One way to verify the synchronized lists of the cluster is to run cluster commands from the command line. In order to perform these actions, you must have access to one of the DDF command line shells directly or through the use of Gogo (https://localhost:8993/system/console/gogo). Once at the command line, execute the following command to see the list of deployed features for your cluster:
cluster:feature-list mygroup
This command will list the available features for your cluster group “mygroup”.
Features for cluster group mygroup
Status Version Name
[installed ] [2.2.0 ] catalog-opensearch-endpoint
......
To view the cluster group’s configurations, execute the following command:
cluster:config-list mygroup
This command will show all shared configurations among the cluster group “mygroup”.
----------------------------------------------------------------
Pid: org.ops4j.pax.url.mvn
Properties:
org.ops4j.pax.url.mvn.useFallbackRepositories = false
service.pid = org.ops4j.pax.url.mvn
org.ops4j.pax.url.mvn.disableAether = true
----------------------------------------------------------------
Pid: org.apache.karaf.webconsole
Properties: ....
The following command will list all repositories associated with the cluster group “mygroup”:
cluster:features-url-list mygroup
The following will be displayed:
mvn:org.apache.cxf.karaf/apache-cxf/2.7.2/xml/features
mvn:org.apache.activemq/activemq-karaf/5.6.0/xml/features
mvn:ddf.catalog.kml/catalog-kml-app/2.1.0/xml/features
mvn:ddf.mime.tika/mime-tika-app/1.0.0/xml/features
If for any reason, any of the lists above do not match the list of features, repositories, or configurations found in the DDF system consoles, the following command can be executed:
cluster:sync
This command should allow for a DDF node to be synchronized with the rest of the cluster.
Check for Active Nodes
Checking whether a node is active can be done utilizing the node ping command. In order to use this command you must have access to one of the DDF command line shells. A list of nodes can be shown by executing the following command:
cluster:node-list
The command should show the following output:
ID Host Name Port
* [192.168.1.110:5701 ] [192.168.1.110 ] [ 5701]
The output will show the ID, host name, and port of each active DDF node in the cluster. The asterisk shows which node you are currently accessing the shell on. Now that you have a listing of node IDs, you can use these to ping other nodes. Execute the following command:
cluster:node-ping 192.168.1.110:5701
The following result will print out until you press ctrl-c:
PING 192.168.1.110:5701
from 1: req=192.168.1.110:5701 time=9 ms
from 2: req=192.168.1.110:5701 time=4 ms
from 3: req=192.168.1.110:5701 time=2 ms
from 4: req=192.168.1.110:5701 time=3 ms
from 5: req=192.168.1.110:5701 time=4 ms
from 6: req=192.168.1.110:5701 time=2 ms
from 7: req=192.168.1.110:5701 time=2 ms
^C
The output will provide you with a typical ping result showing connectivity and response times.
Configuring Catalog Provider
This scenario describes how to reconfigure DDF to use a different catalog provider. This scenario assumes DDF is already running.
|
Use of the Dummy Catalog Provider
This scenario uses the Dummy Catalog Provider as the catalog provider DDF is being reconfigured to use. This is because the Dummy Catalog Provider is the only other catalog provider shipped with DDF out of the box. The Dummy Catalog Provider should never be used in a production environment. It is only used for testing purposes. |
Reconfigure
-
Uninstall a Catalog Provider (if installed) by completing the procedure in the Uninstalling Features section.
-
Install the new Catalog Provider, which will be the Dummy Catalog Provider, by installing its feature ddf-provider-dummy, following the instructions in the Installing Features section.
-
Verify DDF is running with the Dummy Provider as its Catalog Provider.
-
Select the Services tab in the Web Console.
-
Locate the column labeled Bundle on the right. If DDF is running with the Dummy Provider, there is an entry labeled ddf.providers.provider-dummy, as shown below.
-
Id Type(s) Bundle
325 [ddf.catalog.source.CatalogProvider] ddf.providers.provider-dummy (175)
osgi.service.blueprint.compname DummyProvider
Configuring Notifications
Notifications are messages that are sent to clients to inform them of some significant event happening in DDF. Clients must subscribe to a DDF notification channel to receive these messages.
Usage
DDF notifications are currently being utilized in the DDF Catalog application for resource retrieval. When a user initiates a resource retrieval via the DDF Standard UI, DDF opens the channel /ddf/notification/catalog/downloads, where notifications indicating the progress of that resource download are sent. Any client interested in receiving these progress notifications must subscribe to that channel. When DDF starts downloading the resource to the client that requested it, a notification with a status of "Started" will be broadcast. If the resource download fails, a notification with a status of "Failed" will be broadcast. Or, if the resource download is being attempted again after a failure, "Retry" will be broadcast.
When a notification is received, DDF Standard UI displays a popup containing the contents of the notification, so a user is made aware of how their downloads are proceeding.
Behind the scenes, the DDF Standard UI invokes the REST endpoint to retrieve a resource. In this request, it adds the query parameter "user" with the CometD session ID or the unique User ID as the value. This allows the CometD server to know which subscriber is interested in the notification. For example: https://DDF_HOST:8993/services/catalog/sources/ddf.distribution/2f5db9e5131444279a1293c541c106cd?transform=resource&user=1w1qlo79j6tscii19jszwp9s2i55. Notifications contain the following information:
| Parameter Name | Description | Required by DDF Standard UI |
|---|---|---|
application |
"Downloads" for resource retrieval. This is used as a "type" or category of messages. |
Yes |
title |
Resource/file name for resource retrieval. |
Yes |
message |
Human-readable message containing status and a more detailed message. |
Yes |
timestamp |
Timestamp in milliseconds of when event occurs. |
Yes |
user |
CometD Session ID or unique User ID. |
Yes |
status |
Status of event. |
No |
option |
Resource retrieval option. |
No |
bytes |
Number of bytes transmitted. |
No |
Receive Notifications
-
If interested in retrieve resource notifications, a client must subscribe to the CometD
channel/ddf/notification/catalog/downloads. -
If interested in all notification types, a client must subscribe to the CometD
channel/ddf/notification/** -
A client will only receive notifications for resources they have requested.
-
DDF Standard UI is subscribed to all notifications of interest to that
user/browser session: /ddf/notification/** -
See the Usage section for the data that a notification contains.
Publish Notifications
Any application running in DDF can publish notifications that can be viewed by the DDF Standard UI or received by another notifications client.
- Set a properties map containing entries for each of the parameters listed above in the Usage section.
- Set the OSGi event topic to ddf/notification/<application-name>/<notification-type>. Notice that there is no preceding slash on an OSGi event topic name, while there is one on the CometD channel name. The OSGi event topic corresponds to the CometD channel this is published on.
- Post the notification to the OSGi event topic defined in the previous step.
Dictionary<String, Object> properties = new Hashtable<String, Object>();
properties.put("application", "Downloads");
properties.put("title", resourceResponse.getResource().getName());
Long sysTimeMillis = System.currentTimeMillis();
properties.put("message", generateMessage(status, resourceResponse.getResource().getName(), bytes, sysTimeMillis, detail));
properties.put("user", getProperty(resourceResponse, USER));
properties.put("status", "Completed");
properties.put("bytes", 1024);
properties.put("timestamp", sysTimeMillis);
Event event = new Event("ddf/notification/catalog/downloads", properties);
eventAdmin.postEvent(event);
Configuring Thread Pools
The system.properties file found under $DDF_HOME/etc contains properties that will be made available through system properties at the beginning of Karaf’s boot process. The org.codice.ddf.system.threadPoolSize property can be used to specify the size of thread pools used by:
* Federating requests between DDF systems
* Downloading resources
* Handling asynchronous queries, such as queries from the UI
By default, this value is set to 128. It is not recommended to set this value extremely high. If unsure, leave this setting at its default value of 128.
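For example, the pool size can be set explicitly in <ddf-home>/etc/system.properties (128 shown here is the default):

```
org.codice.ddf.system.threadPoolSize=128
```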
Configuring Global System Properties
The system.properties file found under $DDF_HOME/etc contains properties that will be made available through the system properties on startup. After changing any of these properties you will need to restart the system. If you change the 'hostname' property you will also need to configure the certs as described in Configuring DDF with New Certificates.
The system by default uses both http and https so both httpsPort and httpPort need to be specified. The protocol and port properties are the defaults the system should use in places where either http or https could be valid.
org.codice.ddf.system.protocol=https:// #one of http:// or https://
org.codice.ddf.system.hostname=localhost #should be the fully qualified domain name
org.codice.ddf.system.httpsPort=8993 #secure port
org.codice.ddf.system.httpPort=8181 #public port
org.codice.ddf.system.port=8993 #default port corresponding to the protocol selected; should match the httpPort or httpsPort
org.codice.ddf.system.rootContext=/services #the root context that services will be made available under
org.codice.ddf.system.siteName=ddf.distribution #the name of this instance
org.codice.ddf.system.siteContact= #contact for this instance
org.codice.ddf.system.version=2.8.2 #this instance's version
org.codice.ddf.system.organization=Codice Foundation #who owns/runs this instance
The above properties (along with any other system properties) are available to be used as variable parameters in input URL fields within the admin UI. For example, if you wanted to enter the URL for the CSW service, you could write
${org.codice.ddf.system.protocol}${org.codice.ddf.system.hostname}:${org.codice.ddf.system.port}${org.codice.ddf.system.rootContext}/csw
instead of
https://localhost:8993/services/csw
The variable form is longer but will not need to be changed if the system host, port, or root context changes.
Managing Web Service Security
Configuring WSS
- Add system console to whitelisted contexts
  - Navigate to DDF Security.
  - Navigate to the Web Context Policy Manager.
  - Add /system/console to the Whitelisted Contexts.
|
By default, the Catalog Backup Post-Ingest Plugin is NOT enabled. To enable it, the Enable Backup Plugin configuration item must be checked in the Backup Post-Ingest Plugin configuration (Enable Backup Plugin: true). The steps below assume a hostname of 'ddf'. |
|
DDF is enabled with an Insecure Defaults Service which will warn users/admins if the system is configured with insecure defaults. A banner is displayed on the admin console notifying "The system is insecure because default configuration values are in use." A detailed view is available of the properties to update. |
- Configure Catalog External Solr Catalog Provider
  - Change the HTTP URL to: https://ddf:8993/solr
- Configure Persistent Store
  - Solr URL: https://ddf:8993/solr
- Configure Catalog Federation Strategy
  - Change the Solr URL to: https://ddf:8993/solr
- Configure Security STS Client
  - Change the STS WSDL Address to: https://ddf:8993/services/SecurityTokenService?wsdl
- Configure Security STS Server
  - SAML Assertion Lifetime: 86400
  - Change Token Issuer to: ddf
  - Change Signature Username to: ddf
  - Change Encryption Username to: ddf
- Configure Security STS LDAP Login
  - features:install security-sts-ldaplogin
  - LDAP URL: ldaps://ddf:1636
  - SSL Keystore Alias: ddf
- Configure Platform Global Configuration
  - Protocol: https
  - Host: ddf
  - Port: 8993
- Configure the Web Context Policy Manager
  - In White Listed Contexts, add /sso
Install and Configure the Embedded LDAP
|
The Embedded LDAP is not recommended for production use. It is only recommended for development purposes or extremely small server loads. The Embedded LDAP has hard-coded values for the keystore path, truststore path, keystore password, and truststore password. A workaround is to modify these values in config.ldif, as described below. |
- The default password in config.ldif for serverKeystore.jks is changeit. This needs to be modified to password.
  - ds-cfg-key-store-file: ../../keystores/serverKeystore.jks
  - ds-cfg-key-store-type: JKS
  - ds-cfg-key-store-pin: password
  - cn: JKS
- The default password in config.ldif for serverTruststore.jks is changeit. This needs to be modified to password.
  - ds-cfg-trust-store-file: ../../keystores/serverTruststore.jks
  - ds-cfg-trust-store-pin: password
  - cn: JKS
- If using the default keystores and certificates, start the opendj-embedded app: app:start opendj-embedded
- Shutdown DDF: <ddf-home>/bin/shutdown -f
- Add the newly created keystore and truststore
  - Put the newly created serverKeystore.jks in <ddf-home>/etc/keystores
  - Put the newly created serverTruststore.jks in <ddf-home>/etc/keystores
- Configure system properties
  - In <ddf-home>/etc/system.properties, modify the keystore and truststore paths and passwords as appropriate:

#START DDF SETTINGS
# Set the keystore and truststore Java properties
javax.net.ssl.keyStore=etc/keystores/serverKeystore.jks
javax.net.ssl.keyStorePassword=password
javax.net.ssl.trustStore=etc/keystores/serverTruststore.jks
javax.net.ssl.trustStorePassword=password
javax.net.ssl.keyStoreType=jks
# HTTPS specific settings. If making a secure connection not leveraging the HTTPS Java libraries and
# classes (e.g., if you are using secure sockets directly) then you will have to set this directly
https.cipherSuites=TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA
https.protocols=TLSv1,TLSv1.1,TLSv1.2
- Configure users properties
  - In <ddf-home>/etc/users.properties, change localhost=localhost,admin to ddf=ddf,admin
- Start DDF
  - For Windows, run <ddf-home>/bin/ddf.bat
  - For *nix, run <ddf-home>/bin/ddf
  - Admin Console: https://ddf:8993/admin
  - Search UI: https://ddf:8993/search
- Remove system/console from the whitelist
  - Navigate to DDF Security.
  - Navigate to the Web Context Policy Manager.
  - Remove /system/console from the Whitelisted Contexts.
Auditing
|
The Audit Log default location is DISTRIBUTION_HOME/data/log/security.log |
CAS (SSO) Authentication
|
CAS Authentication Logging was obtained using a CAS war file deployed to a Tomcat application server. Tomcat allows configuration of the log file, but, by default, the logs below were stored in the $TOMCAT_HOME/logs/catalina.out file. |
Username and Password
2013-04-24 10:39:45,265 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <org.jasig.cas.adaptors.ldap.FastBindLdapAuthenticationHandler successfully authenticated [username: testuser1]>
2013-04-24 10:39:45,265 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <Resolved principal testuser1>
2013-04-24 10:39:45,265 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <org.jasig.cas.adaptors.ldap.FastBindLdapAuthenticationHandler@6a4d37e5 authenticated testuser1 with credential [username: testuser1].>
2013-04-24 10:39:45,265 INFO [com.github.inspektr.audit.support.Slf4jLoggingAuditTrailManager] - <Audit trail record BEGIN
=============================================================
WHO: [username: testuser1]
WHAT: supplied credentials: [username: testuser1]
ACTION: AUTHENTICATION_SUCCESS
APPLICATION: CAS
WHEN: Wed Apr 24 10:39:45 MST 2013
CLIENT IP ADDRESS: 127.0.0.1
SERVER IP ADDRESS: 127.0.0.1
=============================================================
>
2013-04-24 10:39:17,443 INFO [org.jasig.cas.adaptors.ldap.FastBindLdapAuthenticationHandler] - <Failed to authenticate user testuser1 with error [LDAP: error code 49 - Invalid Credentials]; nested exception is javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]>
2013-04-24 10:39:17,443 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <org.jasig.cas.adaptors.ldap.FastBindLdapAuthenticationHandler failed authenticating [username: testuser1]>
2013-04-24 10:39:17,443 INFO [com.github.inspektr.audit.support.Slf4jLoggingAuditTrailManager] - <Audit trail record BEGIN
=============================================================
WHO: [username: testuser1]
WHAT: supplied credentials: [username: testuser1]
ACTION: AUTHENTICATION_FAILED
APPLICATION: CAS
WHEN: Wed Apr 24 10:39:17 MST 2013
CLIENT IP ADDRESS: 127.0.0.1
SERVER IP ADDRESS: 127.0.0.1
=============================================================
>
PKI Certificate
|
Current testing was performed using the OZone certificates that came with a testAdmin and testUser, which were signed by a common CA. |
2013-04-24 15:13:14,388 INFO [org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler] - <Successfully authenticated CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US, SerialNumber=4>
2013-04-24 15:13:14,390 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler successfully authenticated CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US, SerialNumber=4>
2013-04-24 15:13:14,391 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <Resolved principal CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US>
2013-04-24 15:13:14,391 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler@1e5b04ae authenticated CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US with credential CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US, SerialNumber=4.>
2013-04-24 15:13:14,394 INFO [com.github.inspektr.audit.support.Slf4jLoggingAuditTrailManager] - <Audit trail record BEGIN
=============================================================
WHO: CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US, SerialNumber=4
WHAT: supplied credentials: CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US, SerialNumber=4
ACTION: AUTHENTICATION_SUCCESS
APPLICATION: CAS
WHEN: Wed Apr 24 15:13:14 MST 2013
CLIENT IP ADDRESS: 127.0.0.1
SERVER IP ADDRESS: 127.0.0.1
=============================================================
>
The failure was simulated using a filter on the x509 credential handler. This filter looks for a certain CN in the certificate chain and will fail if it cannot find a match. The server was set up to trust the certificate via the Java truststore, but there were additional requirements for logging in. For this test-case, the chain it was looking for is "CN=Hogwarts Certifying Authority.+". Example from the CAS wiki: https://wiki.jasig.org/display/CASUM/X.509+Certificates.
2013-04-25 14:15:47,477 DEBUG [org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler] - <Evaluating CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US, SerialNumber=4>
2013-04-25 14:15:47,478 DEBUG [org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler] - <.* matches CN=testUser1, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US == true>
2013-04-25 14:15:47,478 DEBUG [org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler] - <CN=Hogwarts Certifying Authority.+ matches EMAILADDRESS=goss-support@owfgoss.org, CN=localhost, OU=Ozone, O=Ozone, L=Columbia, ST=Maryland, C=US == false>
2013-04-25 14:15:47,478 DEBUG [org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler] - <Found valid client certificate>
2013-04-25 14:15:47,478 INFO [org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler] - <Failed to authenticate org.jasig.cas.adaptors.x509.authentication.principal.X509CertificateCredentials@1795f1cc>
2013-04-25 14:15:47,478 INFO [org.jasig.cas.authentication.AuthenticationManagerImpl] - <org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler failed to authenticate org.jasig.cas.adaptors.x509.authentication.principal.X509CertificateCredentials@1795f1cc>
2013-04-25 14:15:47,478 INFO [com.github.inspektr.audit.support.Slf4jLoggingAuditTrailManager] - <Audit trail record BEGIN
=============================================================
WHO: org.jasig.cas.adaptors.x509.authentication.principal.X509CertificateCredentials@1795f1cc
WHAT: supplied credentials: org.jasig.cas.adaptors.x509.authentication.principal.X509CertificateCredentials@1795f1cc
ACTION: AUTHENTICATION_FAILED
APPLICATION: CAS
WHEN: Thu Apr 25 14:15:47 MST 2013
CLIENT IP ADDRESS: 127.0.0.1
SERVER IP ADDRESS: 127.0.0.1
=============================================================
>
STS Authentication
Username and Password
[INFO ] 2014-07-17 14:40:23,340 | qtp1401560510-76 | securityLogger | Username [pparker] successfully logged in using LDAP authentication. Request IP: 127.0.0.1, Port: 52365
[INFO ] 2014-07-17 14:40:24,074 | qtp1401560510-76 | securityLogger | Security Token Service REQUEST
STATUS: SUCCESS
OPERATION: Issue
URL: https://server:8993/services/SecurityTokenService
WS_SEC_PRINCIPAL: 1.2.840.113549.1.9.1=#160d69346365406c6d636f2e636f6d,CN=client,OU=I4CE,O=Lockheed Martin,L=Goodyear,ST=Arizona,C=US
ONBEHALFOF_PRINCIPAL: pparker
TOKENTYPE: http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
CLAIMS_SECONDARY: [http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname]
Request IP: 127.0.0.1, Port: 52365
[WARN ] 2014-07-17 14:42:43,627 | qtp1401560510-75 | securityLogger | Username [pparker] failed LDAP authentication. Request IP: 127.0.0.1, Port: 52386
[WARN ] 2014-07-17 14:42:43,632 | qtp1401560510-75 | securityLogger | Security Token Service REQUEST
STATUS: FAILURE
OPERATION: Issue
URL: https://server:8993/services/SecurityTokenService
WS_SEC_PRINCIPAL: 1.2.840.113549.1.9.1=#160d69346365406c6d636f2e636f6d,CN=client,OU=I4CE,O=Lockheed Martin,L=Goodyear,ST=Arizona,C=US
ONBEHALFOF_PRINCIPAL: pparker
TOKENTYPE: http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
CLAIMS_SECONDARY: [http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname]
EXCEPTION: org.apache.cxf.ws.security.sts.provider.STSException: The specified request failed
Request IP: 127.0.0.1, Port: 52386
PKI Certificate
[INFO ] 2014-07-17 15:03:39,379 | qtp1401560510-74 | securityLogger | Security Token Service REQUEST
STATUS: SUCCESS
OPERATION: Issue
URL: https://localhost:8993/services/SecurityTokenService
WS_SEC_PRINCIPAL: 1.2.840.113549.1.9.1=#160d69346365406c6d636f2e636f6d,CN=client,OU=I4CE,O=Lockheed Martin,L=Goodyear,ST=Arizona,C=US
TOKENTYPE: http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
CLAIMS_SECONDARY: [http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname]
Request IP: 127.0.0.1, Port: 52573
[WARN ] 2014-07-17 15:05:46,061 | qtp1401560510-75 | securityLogger | Security Token Service REQUEST
STATUS: FAILURE
OPERATION: Issue
URL: N.A.
TOKENTYPE: N.A.
APPLIESTO: <null>
EXCEPTION: org.apache.cxf.ws.security.sts.provider.STSException: The request was invalid or malformed
Request IP: 127.0.0.1, Port: 52582
Binary Security Token (CAS)
15:27:48,098 | INFO | tp1343209378-282 | securityLogger | rity.common.audit.SecurityLogger 156 | 247 - security-core-api - 2.2.0.RC6-SNAPSHOT | Telling the STS to request a security token on behalf of the binary security token:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<BinarySecurityToken ValueType="#CAS" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ns1:Id="CAS" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns1="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">U1QtMTctQmw0aGRrS05jaTV3cE82Zm11VE0tY2FzfGh0dHBzOi8vdG9rZW5pc3N1ZXI6ODk5My9zZXJ2aWNlcy9TZWN1cml0eVRva2VuU2VydmljZQ==</BinarySecurityToken>
Request IP: 0:0:0:0:0:0:0:1%0, Port: 53363
15:27:48,351 | INFO | tp1343209378-282 | securityLogger | rity.common.audit.SecurityLogger 156 | 247 - security-core-api - 2.2.0.RC6-SNAPSHOT | Finished requesting security token. Request IP: 127.0.0.1, Port: 53363
The following message is only logged when DEBUG logging is enabled:
15:27:48,355 | DEBUG | tp1343209378-282 | securityLogger | rity.common.audit.SecurityLogger 102 | 247 - security-core-api - 2.2.0.RC6-SNAPSHOT | <?xml version="1.0" encoding="UTF-16"?>
<saml2:Assertion>
SAML ASSERTION WILL BE LOCATED HERE
10:54:21,772 | INFO | qtp995500086-618 | securityLogger | rity.common.audit.SecurityLogger 143 | 245 - security-core-commons - 2.2.0.ALPHA5-SNAPSHOT | Telling the STS to request a security token on behalf of the binary security token:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<BinarySecurityToken ValueType="#CAS" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ns1:Id="CAS" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns1="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">U1QtMjctOU43RUlkNHkzVFoxQmZCb0RIdkItY2Fz</BinarySecurityToken>
10:54:22,119 | INFO | qtp995500086-141 | securityLogger | rity.common.audit.SecurityLogger 143 | 245 - security-core-commons - 2.2.0.ALPHA5-SNAPSHOT | Validating ticket [ST-27-9N7EId4y3TZ1BfBoDHvB-cas] for service [https://server:8993/services/SecurityTokenService]. Request IP: 127.0.0.1, Port: 64548
10:54:22,169 | INFO | qtp995500086-141 | securityLogger | rity.common.audit.SecurityLogger 143 | 245 - security-core-commons - 2.2.0.ALPHA5-SNAPSHOT | Unable to validate CAS token. Request IP: 127.0.0.1, Port: 64548
10:54:22,244 | INFO | qtp995500086-618 | securityLogger | rity.common.audit.SecurityLogger 143 | 245 - security-core-commons - 2.2.0.ALPHA5-SNAPSHOT | Error requesting the security token from STS at: https://server:8993/services/SecurityTokenService.
AuditPlugin
DDF provides an optional Audit Plugin that logs all catalog transactions to the security.log.
Information captured includes user identity, query information, and resources retrieved.
Configuring Audit Plugin
The Audit Plugin is not enabled by default. To enable it, sign in to the Admin Console:
-
Select DDF Catalog
-
Select the Features tab
-
Install both the catalog-security-logging and catalog-security-audit-plugin features
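The same two features can also be installed from the DDF command console instead of the Admin Console. A sketch as a console transcript; the prompt shown is assumed to be the default, and the feature names are taken from the steps above:

```
ddf@local>feature:install catalog-security-logging
ddf@local>feature:install catalog-security-audit-plugin
```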
Web Context Policy Manager
The Web Context Policy Manager defines all security policies for REST endpoints within DDF. It defines:
-
the realms a context should authenticate against.
-
the type of authentication that a context requires.
-
any user attributes required for authorization.
Configuring Web Context Policy Manager
The karaf realm is the only realm available by default and it authenticates against the user.properties file. As JAAS authentication realms are added to the STS, more realms become available to authenticate against.
For example, installing the security-sts-ldaplogin feature adds an ldap realm. Contexts can then be pointed to the ldap realm for authentication, and the STS will be instructed to authenticate them against LDAP. As you add REST endpoints, you may need to add different types of authentication through the Web Context Policy Manager.
| Authentication Type | Description |
|---|---|
| saml | Activates single sign-on (SSO) across all REST endpoints that use SAML. |
| basic | Activates basic authentication. |
| PKI | Activates public key infrastructure authentication. |
| anon (anonymous) | Provides anonymous access. |
CAS SSO Configuration
The Web Service Security (WSS) implementation that comes with DDF was built to run independently of any SSO or authentication mechanism. The security functionality of DDF was tested using the Central Authentication Server (CAS) software. This is a popular SSO appliance and allowed DDF to be tested against realistic use cases. This page contains configurations and settings that were used to help enable CAS to work within the DDF environment.
General Server Setup and Configuration
|
The following procedure defines the steps for installing CAS to a Tomcat 7.x server running on Linux or Windows. Newer versions of Tomcat (8.x) are incompatible with the included server.xml file and require additional changes. |
Install using DDF CAS WAR
DDF comes with a custom distribution of the CAS web application with LDAP and X.509 support configured and built in. Using this configuration may save time and make setup easier.
|
The CAS Web Application can be downloaded from Nexus. To find the latest version, execute a search for "cas-distribution". Link to the first release: http://artifacts.codice.org/content/repositories/releases/org/codice/cas/distribution/cas-distribution/1.0.0/cas-distribution-1.0.0.war |
-
Download and unzip the Tomcat distribution [http://tomcat.apache.org/download-70.cgi]. The installation location is referred to as <TOMCAT_INSTALLATION_DIR>.

$ unzip apache-tomcat-7.0.39.zip
-
Clone https://github.com/codice/cas-distribution to a convenient location. This folder will be referred to as cas-distribution.
-
Set up Keystores and enable SSL. There are sample configurations located within the security-cas-server-webapp project.
-
Copy setenv from cas-distribution/src/main/resources/tomcat to <TOMCAT_INSTALLATION_DIR>/bin

Linux
$ cp cas-distribution/src/main/resources/tomcat/setenv.sh <TOMCAT_INSTALLATION_DIR>/bin/

Windows
copy cas-distribution\src\main\resources\tomcat\setenv.bat <TOMCAT_INSTALLATION_DIR>\bin\
-
Copy server.xml (cas-distribution/src/main/resources/tomcat/conf) to <TOMCAT_INSTALLATION_DIR>/conf

Linux
$ cp cas-distribution/src/main/resources/tomcat/conf/server.xml <TOMCAT_INSTALLATION_DIR>/conf/

Windows
copy cas-distribution\src\main\resources\tomcat\conf\server.xml <TOMCAT_INSTALLATION_DIR>\conf\
-
The above files point to
<TOMCAT_INSTALLATION_DIR>/certs/keystore.jks as the default keystore location. This file does not come with Tomcat; either create it or modify the files copied above (setenv.sh and server.xml) to point to the correct keystore.

mkdir <TOMCAT_INSTALLATION_DIR>/certs

Copy casKeystore.jks from the DDF installation directory into <TOMCAT_INSTALLATION_DIR>/certs/. This allows CAS to use a "cas" private key and to trust anything signed by "server", "ca", or "ca-root".
-
-
Start Tomcat.
$ cd <TOMCAT_INSTALLATION_DIR>/bin/
$ ./startup.sh

Run startup.bat instead of startup.sh on Windows. If setenv.sh was not converted to a .bat file above, startup.bat will not function correctly.
If the Tomcat log contains an exception like the following, or if you cannot access CAS via port 8443 after completing the steps below:
SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-apr-8443"] java.lang.Exception: Connector attribute SSLCertificateFile must be defined when using SSL with APR
uncomment the following in server.xml:
<Listener className="org.apache.catalina.security.SecurityListener" />
then comment out:
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
-
Deploy the DDF CAS WAR to Tomcat.
-
Obtain the CAS WAR by building it from cas-distribution.
-
Copy it into the webapps folder on Tomcat:
$ cp cas-distribution/target/cas.war <TOMCAT_INSTALLATION_DIR>/webapps/
-
CAS should now be running on the Tomcat server. To verify it started without issues, check the Tomcat log and look for lines similar to the following:
Apr 25, 2013 10:55:39 AM org.apache.catalina.startup.HostConfig deployWAR
INFO: Deploying web application archive /apache-tomcat-7.0.39/webapps/cas.war
2013-04-25 10:55:42,831 INFO [org.jasig.cas.services.DefaultServicesManagerImpl] - <Loaded 1 services.>
2013-04-25 10:55:43,540 INFO [org.jasig.cas.util.AutowiringSchedulerFactoryBean] - <Starting Quartz Scheduler now>
CAS will try to authenticate first with X.509 (using the keystore provided as the truststore) and failover to LDAP username/password.
The DDF distribution of CAS is configured to use the embedded LDAP instance that comes with DDF, running on localhost. The LDAP location can be configured by modifying the bottom of the cas.properties file located in <TOMCAT_INSTALLATION_DIR>/webapps/cas/WEB-INF/ after the web application is deployed.
Configure an Existing CAS Installation
If upgrading an existing CAS installation or using the standard CAS web application, refer to the Configure CAS for LDAP page or the Configure CAS for X509 User Certificates page for directions on specific configurations that need to be performed.
|
As part of setting up the server, it is critical to make sure that Tomcat trusts the DDF server certificate and that DDF trusts the certificate from Tomcat. If this is not done correctly, CAS and/or DDF will throw certificate warnings in their logs and will not allow access. |
Configure CAS for DDF
When configuring CAS to integrate with DDF, there are two main configurations that need to be modified. By default, DDF uses 'server' as the hostname for the local DDF instance and 'cas' as the hostname for the CAS server.
CAS Client
The CAS client bundle contains CAS client code that can be used by other bundles when validating and retrieving tickets from CAS. This bundle is extensively used when performing authentication.
When setting up DDF, the 'Server Name' and 'Proxy Callback URL' must be set to the hostname of the local DDF instance.
The 'CAS Server URL' configuration should point to the hostname of the CAS server and should match the SSL certificate that it is using.
CAS Token Validator
The 'CAS Server URL' configuration should point to the hostname of the CAS server and should match the SSL certificate that it is using.
Additional Configuration
Information on each of the CAS-specific bundles that come with DDF, as well as their configurations, can be found on the Security CAS application page.
Example Workflow
The following is a sample workflow that shows how CAS integrates with the DDF WSS implementation.
-
User points browser to DDF Query Page.
-
CAS servlet filters are invoked during request.
-
Assuming a user is not already signed in, the user is redirected to CAS login page.
-
For X.509 authentication, CAS will try to obtain a certificate from the browser. Most browsers will prompt the user to select a valid certificate to use.
-
For username/password authentication, CAS will display a login page.
-
-
After successful sign-in, the user is redirected back to DDF Query page.
-
DDF Query Page obtains the Service Ticket sent from CAS, gets a Proxy Granting Ticket (PGT), and uses that to create a Proxy Ticket for the STS.
-
The user fills in search phrase and selects Search.
-
The Security API uses the incoming CAS proxy ticket to create a RequestSecurityToken call to the STS.
-
The STS validates the proxy ticket to CAS and creates SAML assertion.
-
The Security API returns a Subject class that contains the SAML assertion.
-
The Query Page creates a new QueryRequest and adds the Subject into the properties map.
From step 10 forward, the message is completely decoupled from CAS and will proceed through the framework properly using the SAML assertion that was created in step 8.
Configuring CAS for LDAP
Install and Configure LDAP
DDF comes with an embedded LDAP instance that can be used for testing. During internal testing this LDAP was used extensively.
More information on configuring the LDAP and a list of users and attributes can be found at the Embedded LDAP Configuration page.
Add cas-server-support-ldap-3.3.1_1.jar to CAS
Copy thirdparty/cas-server-support-ldap-3.3.1/target/cas-server-support-ldap-3.3.1_1.jar to {ozone-widget-framework}/apache-tomcat-{version}/webapps/cas/WEB-INF/lib/cas-server-support-ldap-3.3.1_1.jar.
Add spring-ldap-1.2.1_1.jar to CAS
Copy thirdparty/spring-ldap-1.2.1/target/spring-ldap-1.2.1_1.jar to {ozone-widget-framework}/apache-tomcat-{version}/webapps/cas/WEB-INF/lib/spring-ldap-1.2.1_1.jar.
Modify deployerConfigContext.xml
-
In
{ozone-widget-framework}/apache-tomcat-{version}/webapps/cas/WEB-INF/deployerConfigContext.xml, add the FastBindLdapAuthenticationHandler bean definition to the <list> in the property stanza with name authenticationHandlers of the bean stanza with id authenticationManager:

deployerConfigContext.xml
<bean id="authenticationManager" class="org.jasig.cas.authentication.AuthenticationManagerImpl">
    <!-- other property definitions -->
    <property name="authenticationHandlers">
        <list>
            <bean class="org.jasig.cas.adaptors.ldap.FastBindLdapAuthenticationHandler">
                <property name="filter" value="uid=%u,ou=users,dc=example,dc=com" />
                <property name="contextSource" ref="contextSource" />
            </bean>
            <!-- other bean definitions -->
        </list>
    </property>
</bean>
-
In
{ozone-widget-framework}/apache-tomcat-{version}/webapps/cas/WEB-INF/deployerConfigContext.xml, remove the bean stanza with class ozone3.cas.adaptors.UserPropertiesFileAuthenticationHandler from the <list> of the property stanza with name authenticationHandlers.
-
In
{ozone-widget-framework}/apache-tomcat-{version}/webapps/cas/WEB-INF/deployerConfigContext.xml, add the contextSource bean stanza to the beans stanza:

deployerConfigContext.xml
<bean id="contextSource" class="org.jasig.cas.adaptors.ldap.util.AuthenticatedLdapContextSource">
    <property name="urls">
        <list>
            <value>ldap://localhost:1389</value>
        </list>
    </property>
    <property name="userDn" value="uid=admin,ou=system"/>
    <property name="password" value="secret"/>
</bean>
Configure Ozone
Ozone is also set up to work with LDAP. This section is a reference for when Ozone is being used in conjunction with CAS. The following settings were used for internal testing and should only be used as a reference.
-
Modify OWFsecurityContext.xml
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/OWFsecurityContext.xml, change the sec:x509 stanza to the following:

OWFsecurityContext.xml
<sec:x509 subject-principal-regex="CN=(.*?)," user-service-ref="ldapUserService" />
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/OWFsecurityContext.xml, remove the following import:

OWFsecurityContext.xml
<import resource="ozone-security-beans/UserServiceBeans.xml" />
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/OWFsecurityContext.xml, add the following import:

OWFsecurityContext.xml
<import resource="ozone-security-beans/LdapBeans.xml" />
-
-
Modify LdapBeans.xml
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/ozone-security-beans/LdapBeans.xml, change the bean stanza with id contextSource to the following:

LdapBeans.xml
<bean id="contextSource" class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
    <!-- The URL of the ldap server, along with the base path that all other ldap paths will be relative to -->
    <constructor-arg value="ldap://localhost:1389/dc=example,dc=com"/>
</bean>
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/ozone-security-beans/LdapBeans.xml, change the bean stanza with id authoritiesPopulator to the following:

LdapBeans.xml
<bean id="authoritiesPopulator" class="org.springframework.security.ldap.userdetails.DefaultLdapAuthoritiesPopulator">
    <constructor-arg ref="contextSource"/>
    <!-- search base for determining what roles a user has -->
    <constructor-arg value="ou=roles"/>
</bean>
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/ozone-security-beans/LdapBeans.xml, change the bean stanza with id ldapUserSearch to the following:

LdapBeans.xml
<bean id="ldapUserSearch" class="org.springframework.security.ldap.search.FilterBasedLdapUserSearch">
    <!-- search base for finding User records -->
    <constructor-arg value="ou=users" />
    <!-- filter applied to entries under the search base in order to find a given user; this default searches for an entry with a matching uid -->
    <constructor-arg value="(uid={0})" />
    <constructor-arg ref="contextSource" />
</bean>
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/ozone-security-beans/LdapBeans.xml, change the bean stanza with id userDetailsMapper to the following:

LdapBeans.xml
<bean id="userDetailsMapper" class="ozone.securitysample.authentication.ldap.OWFUserDetailsContextMapper">
    <constructor-arg ref="contextSource" />
    <!-- search base for finding OWF group membership -->
    <constructor-arg value="ou=groups" />
    <!-- filter that matches only groups that have the given username listed as a "member" attribute -->
    <constructor-arg value="(member={0})" />
</bean>
-
-
Modify OWFCasBeans.xml
-
In {ozone-widget-framework}/apache-tomcat-{version}/lib/ozone-security-beans/OWFCasBeans.xml, change the bean stanza with id casAuthenticationProvider to the following:

OWFCasBeans.xml
<bean id="casAuthenticationProvider" class="org.springframework.security.cas.authentication.CasAuthenticationProvider">
    <property name="userDetailsService" ref="ldapUserService" />
    <property name="serviceProperties" ref="serviceProperties" />
    <property name="ticketValidator" ref="ticketValidator" />
    <property name="key" value="an_id_for_this_auth_provider_only" />
</bean>
-
Configuring CAS for X509 User Certificates
The following settings were tested with CAS version 3.3.1. If any issues occur while configuring newer versions, check the External Links section at the bottom of this page for the CAS documentation, which explains setting up certificate authentication.
Add the cas-server-support-x509-3.3.1.jar to CAS
Copy thirdparty/cas-server-support-x509-3.3.1/target/cas-server-support-x509-3.3.1.jar to apache-tomcat-{version}/webapps/cas/WEB-INF/lib/cas-server-support-x509-3.3.1.jar.
Configure Web Flow
-
In apache-tomcat-{version}/webapps/cas/WEB-INF/login-webflow.xml, make the following modifications:
-
Remove the XML comments around the action-state stanza with id startAuthenticate.

startAuthenticate
<action-state id="startAuthenticate">
    <action bean="x509Check" />
    <transition on="success" to="sendTicketGrantingTicket" />
    <transition on="error" to="viewLoginForm" />
</action-state>
-
Modify the decision-state stanza with id renewRequestCheck as follows.

renewRequestCheck
<decision-state id="renewRequestCheck">
    <if test="${externalContext.requestParameterMap['renew'] != '' &amp;&amp; externalContext.requestParameterMap['renew'] != null}" then="startAuthenticate" else="generateServiceTicket" />
</decision-state>
-
Modify the decision-state stanza with id gatewayRequestCheck as follows.

gatewayRequestCheck
<decision-state id="gatewayRequestCheck">
    <if test="${externalContext.requestParameterMap['gateway'] != '' &amp;&amp; externalContext.requestParameterMap['gateway'] != null &amp;&amp; flowScope.service != null}" then="redirect" else="startAuthenticate" />
</decision-state>
-
-
In apache-tomcat-{version}/webapps/cas/WEB-INF/cas-servlet.xml, make the following modifications:
-
Define the x509Check bean.

x509Check
<bean id="x509Check"
      class="org.jasig.cas.adaptors.x509.web.flow.X509CertificateCredentialsNonInteractiveAction"
      p:centralAuthenticationService-ref="centralAuthenticationService" />
-
Configure the Authentication Handler
In apache-tomcat-{version}/webapps/cas/WEB-INF/deployerConfigContext.xml, make the following modifications:
-
In the
list stanza of the property stanza with name authenticationHandlers of the bean stanza with id authenticationManager, add the X509CredentialsAuthenticationHandler bean definition.

X509CredentialsAuthenticationHandler
<bean id="authenticationManager" class="org.jasig.cas.authentication.AuthenticationManagerImpl">
    <!-- Other property definitions -->
    <property name="authenticationHandlers">
        <list>
            <!-- Other bean definitions -->
            <bean class="org.jasig.cas.adaptors.x509.authentication.handler.support.X509CredentialsAuthenticationHandler">
                <property name="trustedIssuerDnPattern" value=".*" />
                <!--
                <property name="maxPathLength" value="3" />
                <property name="checkKeyUsage" value="true" />
                <property name="requireKeyUsage" value="true" />
                -->
            </bean>
        </list>
    </property>
</bean>
Configure the Credentials to the Principal Resolver
In apache-tomcat-{version}/webapps/cas/WEB-INF/deployerConfigContext.xml, make the following modifications:
-
In the list stanza of the property stanza with name
credentialsToPrincipalResolvers of the bean stanza with id authenticationManager, add the X509CertificateCredentialsToIdentifierPrincipalResolver bean definition. The pattern in the value attribute on the property stanza can be modified to suit your needs. The following simple example uses the OU and first CN fields of the DN as the Principal.

X509CertificateCredentialsToIdentifierPrincipalResolver
<bean id="authenticationManager" class="org.jasig.cas.authentication.AuthenticationManagerImpl">
    <property name="credentialsToPrincipalResolvers">
        <list>
            <!-- Other bean definitions -->
            <bean class="org.jasig.cas.adaptors.x509.authentication.principal.X509CertificateCredentialsToIdentifierPrincipalResolver">
                <property name="identifier" value="$OU $CN" />
            </bean>
        </list>
    </property>
    <!-- Other property definitions -->
</bean>
-
In addition to the PrincipalResolver mentioned above, CAS comes with other resolvers that can return different representations of the user identifier. This list was obtained from the official CAS Documentation site linked at the bottom of this page.
| Resolver Class | Identifier Output |
|---|---|
| X509CertificateCredentialsToDistinguishedNamePrincipalResolver | Retrieves the complete distinguished name and uses that as the identifier. |
| X509CertificateCredentialsToIdentifierPrincipalResolver | Transforms some subset of the identifier into the ID for the principal. |
| X509CertificateCredentialsToSerialNumberPrincipalResolver | Uses the unique serial number of the certificate. |
| X509CertificateCredentialsToSerialNumberAndIssuerDNPrincipalResolver | Creates a most-likely globally unique reference to this certificate as a DN-like entry, using the CA name and the unique serial number of the certificate for that CA. |
Different resolvers should be used depending on the use case for the server. When performing external attribute lookup (e.g., attribute lookup via DIAS), it is necessary to have CAS return the full DN as the identifier, and the X509CertificateCredentialsToDistinguishedNamePrincipalResolver class should be used. When using a local LDAP, however, the X509CertificateCredentialsToIdentifierPrincipalResolver class can be used to return only the username that maps directly to the LDAP username.
Default Certificates
To verify certificate authentication with the default CAS files, make sure that the included testUser and testAdmin certificates are installed in your web browser. This has only been tested to work with Firefox. These certificates were provided with the Ozone Widget Framework and can be used in development environments.
-
The sample certificate for testUser1 is {ozone-widget-framework}/apache-tomcat-{version}/certs/testUser1.p12
-
password: password
-
-
The sample certificate for testAdmin1 is {ozone-widget-framework}/apache-tomcat-{version}/certs/testAdmin1.p12
-
password: password
-
External Links
For more information on CAS configuration options and what each setting means, refer to their documentation page: https://wiki.jasig.org/display/CASUM/X.509+Certificates.
Certificate Management
DDF uses certificates in two ways:
-
To transmit and receive encrypted messages.
-
To perform authentication of an incoming user request.

This page details general management operations for using certificates in DDF.
Default Certificates
DDF comes with a default keystore that contains certificates. The keystore is used for different services and the certificate contained within it is aliased to "localhost".
| Alias | Keystore | Truststore | Configuration Location | Usage |
|---|---|---|---|---|
localhost |
serverKeystore.jks |
serverTruststore.jks |
File: etc/org.ops4j.pax.web.cfg File: etc/ws-security/server/encryption.properties File: etc/ws-security/server/signature.properties File: etc/ws-security/issuer/encryption.properties File: etc/ws-security/issuer/signature.properties File: etc/system.properties |
Used to secure (SSL) all of the endpoints for DDF, to perform outgoing SSL requests, and to sign STS SAML assertions. This also includes the Admin Console and any other web service hosted by DDF. |
File Management
File management includes creating and configuring the files that contain the certificates. In DDF, these files are generally Java Keystores (JKS) and Certificate Revocation Lists (CRL). This section includes commands and tools that can be used to perform these operations.
The following tools are used:
-
openssl
-
Windows users can use: openssl for windows (https://code.google.com/p/openssl-for-windows/downloads/detail?name=openssl-0.9.8k_X64.zip&can=2&q=)
-
-
The standard Java keytool certificate management utility (http://docs.oracle.com/javase/7/docs/technotes/tools/windows/keytool.html). Portecle (http://portecle.sourceforge.net/) can be used for keytool operations if a GUI is preferred over a command line interface.
General Certificates
Create a CA Key and Certificate
The following steps define the procedure for creating a root CA to sign certificates.
-
Create a key pair.
$> openssl genrsa -aes128 -out root-ca.key 1024
-
Use the key to sign the CA certificate.
$> openssl req -new -x509 -days 3650 -key root-ca.key -out root-ca.crt
Use the CA to Sign Certificates
The following steps define the procedure for having the CA sign a certificate for the tokenissuer user.
-
Generate a private key and a Certificate Signing Request (CSR).
$> openssl req -newkey rsa:1024 -keyout tokenissuer.key -out tokenissuer.req
-
Sign the certificate by the CA.
$> openssl ca -out tokenissuer.crt -infiles tokenissuer.req
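The interactive steps above can also be scripted end to end. A sketch: the -subj values are illustrative, and openssl x509 -req is used here in place of openssl ca (which requires a demoCA directory layout) to sign the request directly.

```shell
set -e
# Create the CA key pair and self-signed CA certificate (no passphrase, for scripting)
openssl genrsa -out root-ca.key 2048
openssl req -new -x509 -days 3650 -key root-ca.key -out root-ca.crt \
  -subj "/CN=Example Root CA"
# Generate the tokenissuer private key and CSR
openssl req -newkey rsa:2048 -nodes -keyout tokenissuer.key \
  -out tokenissuer.req -subj "/CN=tokenissuer"
# Sign the CSR directly with the CA key
openssl x509 -req -in tokenissuer.req -CA root-ca.crt -CAkey root-ca.key \
  -CAcreateserial -days 365 -out tokenissuer.crt
# Confirm the new certificate chains back to the CA
openssl verify -CAfile root-ca.crt tokenissuer.crt
```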
Java Keystore (JKS)
Create a New Keystore/Truststore with an Existing Certificate and Private Key
1. Using the private key, certificate, and CA certificate, create a new keystore containing the data from the new files.
   cat client.crt >> client.key
   openssl pkcs12 -export -in client.key -out client.p12
   keytool -importkeystore -srckeystore client.p12 -destkeystore clientKeystore.jks -srcstoretype pkcs12 -alias 1
   keytool -changealias -alias 1 -destalias client -keystore clientKeystore.jks
   keytool -importcert -file ca.crt -keystore clientKeystore.jks -alias "ca"
   keytool -importcert -file ca-root.crt -keystore clientKeystore.jks -alias "ca-root"
2. Create the truststore using just the CA certificate. Based on the concept of CA signing, the CA should be the only entry needed in the truststore.
   keytool -import -trustcacerts -alias "ca" -file ca.crt -keystore truststore.jks
   keytool -import -trustcacerts -alias "ca-root" -file ca-root.crt -keystore truststore.jks
3. Create a PEM file using the certificate, as it is the format that some applications use.
   openssl x509 -in client.crt -out client.der -outform DER
   openssl x509 -in client.der -inform DER -out client.pem -outform PEM
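As a self-contained check of the PEM conversion step, the following sketch creates a throwaway self-signed certificate (names are illustrative) and round-trips it through DER and back to PEM:

```shell
# Throwaway self-signed certificate standing in for client.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout client.key \
    -out client.crt -days 365 -subj "/CN=client"

# PEM -> DER -> PEM round trip, as in the step above.
openssl x509 -in client.crt -out client.der -outform DER
openssl x509 -in client.der -inform DER -out client.pem -outform PEM

# Both files describe the same certificate.
openssl x509 -in client.pem -noout -subject
```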
Import into a Java Keystore (JKS)
The following steps define the procedure for importing a PKCS12 keystore generated by openssl into a Java keystore (JKS).
1. Put the private key and the certificate into one file.
   $> cat tokenissuer.crt >> tokenissuer.key
2. Put the private key and the certificate in a PKCS12 keystore.
   $> openssl pkcs12 -export -in tokenissuer.key -out tokenissuer.p12
3. Import the PKCS12 keystore into a JKS.
   $> keytool -importkeystore -srckeystore tokenissuer.p12 -destkeystore stsKeystore.jks -srcstoretype pkcs12 -alias 1
4. Change the alias.
   $> keytool -changealias -alias 1 -destalias tokenissuer
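Run non-interactively, the import can be sketched as follows. The `changeit` passphrase and the throwaway self-signed certificate are illustrative; the `-srcalias`/`-destalias` options fold the alias change into the import itself:

```shell
# Throwaway key and certificate standing in for the CA-signed tokenissuer pair.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tokenissuer.key \
    -out tokenissuer.crt -days 365 -subj "/CN=tokenissuer"

# Steps 1 and 2: combine key and certificate, then build a PKCS12 keystore.
cat tokenissuer.crt >> tokenissuer.key
openssl pkcs12 -export -in tokenissuer.key -out tokenissuer.p12 -passout pass:changeit

# Steps 3 and 4: import into a JKS under the final alias. keytool ships with
# the JDK; the guard lets the sketch run on hosts without one.
if command -v keytool >/dev/null 2>&1; then
    keytool -importkeystore -srckeystore tokenissuer.p12 -srcstoretype pkcs12 \
        -srcstorepass changeit -srcalias 1 \
        -destkeystore stsKeystore.jks -deststorepass changeit \
        -destkeypass changeit -destalias tokenissuer -noprompt
    keytool -list -keystore stsKeystore.jks -storepass changeit
fi
```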
Certificate Revocation List (CRL)
Create a Certificate Revocation List (CRL)
1. Using the CA created in the above steps, create a CRL in which the tokenissuer's certificate is valid.
   $> openssl ca -gencrl -out crl-tokenissuer-valid.pem
Revoke a Certificate and Create a New CRL that Contains the Revoked Certificate
$> openssl ca -revoke tokenissuer.crt
$> openssl ca -gencrl -out crl-tokenissuer-revoked.pem
View a CRL
Use the following command to view the serial numbers of the revoked certificates:
$> openssl crl -inform PEM -text -noout -in crl-tokenissuer-revoked.pem
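The `openssl ca` commands in this section assume a CA that has already been initialized with a database, serial file, and configuration. The following minimal end-to-end sketch (directory layout, subjects, and file names are all illustrative) issues a certificate, revokes it, and inspects the resulting CRL:

```shell
set -e

# Minimal CA directory layout expected by `openssl ca`.
mkdir -p demoCA/newcerts
touch demoCA/index.txt
echo 1000 > demoCA/serial
echo 1000 > demoCA/crlnumber

# Minimal configuration file for the CA operations.
cat > ca.cnf <<'EOF'
[ ca ]
default_ca = CA_default
[ CA_default ]
dir              = ./demoCA
database         = $dir/index.txt
new_certs_dir    = $dir/newcerts
serial           = $dir/serial
crlnumber        = $dir/crlnumber
certificate      = ./root-ca.crt
private_key      = ./root-ca.key
default_md       = sha256
default_days     = 365
default_crl_days = 30
policy           = policy_any
[ policy_any ]
commonName = supplied
EOF

# Root CA key and self-signed certificate (no passphrase, for this sketch only).
openssl genrsa -out root-ca.key 2048
openssl req -new -x509 -days 3650 -key root-ca.key -out root-ca.crt \
    -subj "/CN=Demo Root CA"

# Issue a certificate for tokenissuer, then revoke it and regenerate the CRL.
openssl req -newkey rsa:2048 -nodes -keyout tokenissuer.key \
    -out tokenissuer.req -subj "/CN=tokenissuer"
openssl ca -batch -config ca.cnf -out tokenissuer.crt -infiles tokenissuer.req
openssl ca -config ca.cnf -gencrl -out crl-tokenissuer-valid.pem
openssl ca -config ca.cnf -revoke tokenissuer.crt
openssl ca -config ca.cnf -gencrl -out crl-tokenissuer-revoked.pem

# The revoked certificate's serial number now appears in the CRL.
openssl crl -inform PEM -text -noout -in crl-tokenissuer-revoked.pem
```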
Configuration Management
Configuration management includes configuring DDF to use existing certificates and defining configuration options for the system. This includes configuring certificate revocation and keystores.
Certificate Revocation Configuration
Enable Revocation
|
Enabling CRL revocation or modifying the CRL file will require a restart of DDF to apply updates. |
1. Place the CRL in <ddf.home>/etc/keystores.
2. Uncomment the following line in <ddf.home>/etc/ws-security/server/encryption.properties and replace the file name with the CRL file used in step 1.
   #org.apache.ws.security.crypto.merlin.x509crl.file=etc/keystores/crlTokenissuerValid.pem
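For reference, once uncommented (keeping the illustrative file name from above), the property reads:

```properties
org.apache.ws.security.crypto.merlin.x509crl.file=etc/keystores/crlTokenissuerValid.pem
```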
Uncommenting this property also enables CRL revocation for any context policy implementing PKI authentication. For example, adding an authentication policy of "/search=PKI|ANON" in the Web Context Policy Manager will disable basic authentication and require a certificate for the Search UI. A certificate that is not in the CRL will be allowed through; a revoked certificate will receive a 401 error. If no certificate is provided, the request is passed to the anonymous handler and the user is granted anonymous access.
Disable Revocation
|
Disabling CRL revocation or modifying the CRL file will require a restart of DDF to apply updates. |
1. Comment out the following line in <ddf.home>/etc/ws-security/server/encryption.properties.
   #org.apache.ws.security.crypto.merlin.x509crl.file=etc/keystores/crlTokenissuerValid.pem
The PKIHandler will not check the CRL if this property is not defined.
Add Revocation to a Web Context
The PKIHandler implements CRL revocation, so any web context that is configured to use PKI authentication will also use CRL revocation if revocation is enabled.
1. After enabling revocation (see above), open the Web Context Policy Manager.
2. Add or modify a Web Context to use PKI authentication. For example, enabling CRL for the Search UI endpoint would require adding an authentication policy of /search=PKI.
3. If anonymous access is required, add "ANON" to the policy, e.g., /search=PKI|ANON.
With anonymous access, a user with a revoked cert will be given a 401 error, but users without a certificate will be able to access the web context as the anonymous user.
|
Disabling or enabling CRL revocation or modifying the CRL file will require a restart of DDF to apply updates. If CRL checking is already enabled, adding a new context via the Web Context Policy Manager will not require a restart. |
Add Revocation to a New Endpoint
|
This section explains how to add CXF’s CRL revocation method to an endpoint and not the CRL revocation method in the PKIHandler described above. |
This guide assumes that the endpoint being created uses CXF and is being started via Blueprint from inside the OSGi container. If other tools are being used, the configuration may differ. The CXF WS-Security page (http://cxf.apache.org/docs/ws-securitypolicy.html) contains additional information and samples.
1. Add the following property to the jaxws endpoint in the endpoint's blueprint.xml:
   <entry key="ws-security.enableRevocation" value="true"/>
Example XML snippet for the jaxws:endpoint with the property:
<jaxws:endpoint id="Test" implementor="#testImpl"
wsdlLocation="classpath:META-INF/wsdl/TestService.wsdl"
address="/TestService">
<jaxws:properties>
<entry key="ws-security.enableRevocation" value="true"/>
</jaxws:properties>
</jaxws:endpoint>
Verify Revocation Is Taking Place
A warning similar to the following will be displayed in the logs of the source and endpoint showing the exception encountered during certificate validation:
11:48:00,016 | WARN | tp2085517656-302 | WSS4JInInterceptor | ecurity.wss4j.WSS4JInInterceptor 330 | 164 - org.apache.cxf.cxf-rt-ws-security - 2.7.3 |
org.apache.ws.security.WSSecurityException: General security error (Error during certificate path validation: Certificate has been revoked, reason: unspecified)
at org.apache.ws.security.components.crypto.Merlin.verifyTrust(Merlin.java:838)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.validate.SignatureTrustValidator.verifyTrustInCert(SignatureTrustValidator.java:213)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.validate.SignatureTrustValidator.validate(SignatureTrustValidator.java:72)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.validate.SamlAssertionValidator.verifySignedAssertion(SamlAssertionValidator.java:121)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.validate.SamlAssertionValidator.validate(SamlAssertionValidator.java:100)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.processor.SAMLTokenProcessor.handleSAMLToken(SAMLTokenProcessor.java:188)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.processor.SAMLTokenProcessor.handleToken(SAMLTokenProcessor.java:78)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:396)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:274)[164:org.apache.cxf.cxf-rt-ws-security:2.7.3]
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:93)[164:org.apache.cxf.cxf-rt-ws-security:2.7.3]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)[123:org.apache.cxf.cxf-api:2.7.3]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)[123:org.apache.cxf.cxf-api:2.7.3]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:218)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:198)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:137)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:158)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:243)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:163)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)[52:org.apache.geronimo.specs.geronimo-servlet_2.5_spec:1.1.2]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:219)[130:org.apache.cxf.cxf-rt-transports-http:2.7.3]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:547)[63:org.eclipse.jetty.servlet:7.5.4.v20111024]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:480)[63:org.eclipse.jetty.servlet:7.5.4.v20111024]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceServletHandler.doHandle(HttpServiceServletHandler.java:70)[73:org.ops4j.pax.web.pax-web-jetty:1.0.11]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:520)[62:org.eclipse.jetty.security:7.5.4.v20111024]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:941)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doHandle(HttpServiceContext.java:117)[73:org.ops4j.pax.web.pax-web-jetty:1.0.11]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:409)[63:org.eclipse.jetty.servlet:7.5.4.v20111024]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:186)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:875)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:110)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.Server.handle(Server.java:349)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:441)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.HttpConnection$RequestHandler.content(HttpConnection.java:936)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:893)[57:org.eclipse.jetty.http:7.5.4.v20111024]
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:218)[57:org.eclipse.jetty.http:7.5.4.v20111024]
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:50)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:245)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.server.ssl.SslSocketConnector$SslConnectorEndPoint.run(SslSocketConnector.java:663)[61:org.eclipse.jetty.server:7.5.4.v20111024]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:598)[55:org.eclipse.jetty.util:7.5.4.v20111024]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:533)[55:org.eclipse.jetty.util:7.5.4.v20111024]
at java.lang.Thread.run(Thread.java:662)[:1.6.0_33]
Caused by: java.security.cert.CertPathValidatorException: Certificate has been revoked, reason: unspecified
at sun.security.provider.certpath.PKIXMasterCertPathValidator.validate(PKIXMasterCertPathValidator.java:139)[:1.6.0_33]
at sun.security.provider.certpath.PKIXCertPathValidator.doValidate(PKIXCertPathValidator.java:330)[:1.6.0_33]
at sun.security.provider.certpath.PKIXCertPathValidator.engineValidate(PKIXCertPathValidator.java:178)[:1.6.0_33]
at java.security.cert.CertPathValidator.validate(CertPathValidator.java:250)[:1.6.0_33]
at org.apache.ws.security.components.crypto.Merlin.verifyTrust(Merlin.java:814)[161:org.apache.ws.security.wss4j:1.6.9]
... 45 more
Encryption Service
Encryption Command
A security:encrypt command is provided with DDF that allows plain text to be encrypted. This is useful when displaying password fields in a GUI.
Below is an example of the security:encrypt command used to encrypt the plain text "myPasswordToEncrypt". The output, bR9mJpDVo8bTRwqGwIFxHJ5yFJzatKwjXjIo/8USWm8=, is the encrypted value.
ddf@local>security:encrypt myPasswordToEncrypt
bR9mJpDVo8bTRwqGwIFxHJ5yFJzatKwjXjIo/8USWm8=
Redaction and Filtering
Redaction and filtering are performed by a Post-Query plugin that runs after a query completes.
How Redaction and Filtering Works
Each metacard result will contain security attributes that are pulled from the metadata record after being processed by a PostQueryPlugin that populates this attribute. The security attribute is a HashMap containing a set of keys that map to lists of values. The metacard is then processed by a filter/redaction plugin that creates a KeyValueCollectionPermission from the metacard’s security attribute. This permission is then checked against the user subject to determine if the subject has the correct claims to view that metacard. The decision to filter/redact the metacard ultimately relies on the installed PDP (features:install security-pdp-java or features:install security-pdp-xacml). The PDP that is being used returns a decision, and the metacard is either filtered/redacted or allowed to pass through.
|
The default setting is to redact records. |
The security attributes populated on the metacard are completely dependent on the type of the metacard. Each type of metacard must have its own PostQueryPlugin that reads the metadata being returned and populates the metacard’s security attribute. If the subject or resource (metacard) permissions are missing during redaction, that resource is redacted.
Example (represented as simple XML for ease of understanding):
<metacard>
<security>
<map>
<entry key="entry1" value="A,B" />
<entry key="entry2" value="X,Y" />
<entry key="entry3" value="USA,GBR" />
<entry key="entry4" value="USA,AUS" />
</map>
</security>
</metacard>
<user>
<claim name="claim1">
<value>A</value>
<value>B</value>
</claim>
<claim name="claim2">
<value>X</value>
<value>Y</value>
</claim>
<claim name="claim3">
<value>USA</value>
</claim>
<claim name="claim4">
<value>USA</value>
</claim>
</user>
In the above example, the user’s claims are represented very simply and are similar to how they would actually appear in a SAML 2 assertion. Each of these user (or subject) claims will be converted to a KeyValuePermission object. These permission objects will be implied against the permission object generated from the metacard record. In this particular case, the metacard might be allowed if the policy is configured appropriately because all of the permissions line up correctly.
Redaction Policies
The procedure for setting up a policy differs depending on which PDP implementation is installed. The security-pdp-java implementation is the simplest PDP to use, so it is covered here.
1. Open https://localhost:8993/system/console/configuration.
2. Click on the Authz Security Settings configuration.
3. Add any roles that are allowed to access protected services.
4. Add any SOAP actions that are not to be protected by the PDP.
5. Add any attribute mappings necessary to map between subject claims and metacard values.
   - For example, the above example would require two Match All mappings of claim1=entry1 and claim2=entry2.
   - Match One mappings would contain claim3=entry3 and claim4=entry4.
|
See the Security PDP AuthZ Realm section of this documentation for a description of the configuration page. |
With the security-pdp-java feature configured in this way, the above Metacard would be displayed to the user.
The XACML PDP is explained in more detail in the XACML Policy Decision Point (PDP) section of this documentation. It is the administrator’s responsibility to write a XACML policy capable of returning the correct response message. The Java-based PDP should perform adequately in most situations. It is possible to install the security-pdp-java and security-pdp-xacml features at the same time; the system could be configured this way so that the Java PDP handles most cases and XACML policies handle only situations more complex than what the Java PDP is designed for. Keep in mind that running both PDPs is a very complex configuration and should only be attempted with a full understanding of the details.
Redact a New Type of Metacard
To enable redaction/filtering on a new type of record, implement a PostQueryPlugin that is able to read the string metadata contained within the metacard record. The plugin must set the security attribute to a map of lists of values extracted from the metacard. Note that DDF provides no default plugin that populates the security attribute on the metacard; a plugin must be created to populate these fields in order for redaction/filtering to work correctly.
Example redacted record:
<?xml version="1.0" encoding="UTF-8"?>
<metacard xmlns="urn:catalog:metacard" xmlns:ns2="http://www.opengis.net/gml" xmlns:ns3="http://www.w3.org/1999/xlink" xmlns:ns4="http://www.w3.org/2001/SMIL20/" xmlns:ns5="http://www.w3.org/2001/SMIL20/Language" ns2:id="99f494b22f4341e9a3ba3ef6a5fe8734">
<type>ddf.metacard</type>
<source>ddf.distribution</source>
<stringxml name="metadata">
<value>
<meta:Resource xmlns:meta="...">
<meta:identifier meta:qualifier="http://metadata/noaccess" meta:value="REDACTED"/>
<meta:title>REDACTED</meta:title>
<meta:creator>
<meta:Organization>
<meta:name>REDACTED</meta:name>
</meta:Organization>
</meta:creator>
<meta:subjectCoverage>
<meta:Subject>
<meta:category meta:label="REDACTED"/>
</meta:Subject>
</meta:subjectCoverage>
<meta:security SEC:controls="A B" SEC:group="X" SEC:origin="USA"/>
</meta:Resource>
</value>
</stringxml>
<string name="resource-uri">
<value>catalog://metadata/noaccess</value>
</string>
<string name="title">
<value>REDACTED</value>
</string>
<string name="resource-size">
<value>REDACTED</value>
</string>
</metacard>
Security Token Service
The Security Token Service (STS) is a service running in DDF that allows clients to request SAML v2.0 assertions. These assertions are then used to authenticate a client allowing them to issue other requests, such as ingests or queries to DDF services.
The STS is an extension of Apache CXF-STS. It is a SOAP web service that utilizes WS-Security policies. The generated SAML assertions contain attributes about a user and are used by the Policy Enforcement Point (PEP) in the secure endpoints. Specific configuration details on the bundles that come with DDF can be found on the Security STS application page. This page details all of the STS components that come out of the box with DDF, along with configuration options, installation help, and which services they import and export.
Using the Security Token Service (STS)
Once installed, the STS can be used to request SAML v2.0 assertions via a SOAP web service request. Out of the box, the STS supports authentication from existing SAML tokens, CAS proxy tickets, username/password, and x509 certificates. It also supports retrieving claims using LDAP.
Standalone Installation
The STS cannot currently be installed on a kernel distribution of DDF. To run a STS-only DDF installation, uninstall the catalog components that are not being used. The following list displays the features that can be uninstalled to minimize the runtime size of DDF in an STS-only mode. This list is not a comprehensive list of every feature that can be uninstalled; it is a list of the larger components that can be uninstalled without impacting the STS functionality.
- catalog-core-standardframework
- catalog-solr-embedded-provider
- catalog-opensearch-endpoint
- catalog-opensearch-source
- catalog-rest-endpoint
STS Claims Handlers
Claims handlers are classes that convert the incoming user credentials into a set of attribute claims that will be populated in the SAML assertion. An example in action would be the LDAPClaimsHandler, which takes in the user’s credentials and retrieves the user’s attributes from a backend LDAP server. These attributes are then mapped and added to the SAML assertion being created. Integrators and developers can add more claims handlers that can handle other types of external services that store user attributes.
Add a Custom Claims Handler
Description
A claim is an additional piece of data about a principal that can be included in a token along with basic token data. A claims manager provides hooks for a developer to plug in claims handlers to ensure that the STS includes the specified claims in the issued token.
Motivation
A developer may want to add a custom claims handler to retrieve attributes from an external attribute store.
Steps
The following steps define the procedure for adding a custom claims handler to the STS.
1. The new claims handler must implement the org.apache.cxf.sts.claims.ClaimsHandler interface.
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.cxf.sts.claims;

import java.net.URI;
import java.util.List;

/**
 * This interface provides a pluggable way to handle Claims.
 */
public interface ClaimsHandler {

    List<URI> getSupportedClaimTypes();

    ClaimCollection retrieveClaimValues(RequestClaimCollection claims, ClaimsParameters parameters);

}
2. Expose the new claims handler as an OSGi service under the org.apache.cxf.sts.claims.ClaimsHandler interface.

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="customClaimsHandler" class="security.sts.claimsHandler.CustomClaimsHandler" />

    <service ref="customClaimsHandler" interface="org.apache.cxf.sts.claims.ClaimsHandler"/>

</blueprint>
3. Deploy the bundle.
If the new claims handler calls an external service that is secured with SSL, a developer may have to add the root CA of the external site to the DDF trustStore and add a valid certificate into the DDF keyStore. Doing so allows SSL-encrypted messages to be exchanged with and accepted by the external service. For more information on certificates, refer to the Configuring a Java Keystore for Secure Communications page.
STS WS-Trust WSDL Document
|
This XML file is found inside of the STS bundle and is named ws-trust-1.4-service.wsdl. |
<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions xmlns:tns="http://docs.oasis-open.org/ws-sx/ws-trust/200512/" xmlns:wstrust="http://docs.oasis-open.org/ws-sx/ws-trust/200512/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsap10="http://www.w3.org/2006/05/addressing/wsdl" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://www.w3.org/ns/ws-policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" targetNamespace="http://docs.oasis-open.org/ws-sx/ws-trust/200512/">
<wsdl:types>
<xs:schema elementFormDefault="qualified" targetNamespace="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<xs:element name="RequestSecurityToken" type="wst:AbstractRequestSecurityTokenType"/>
<xs:element name="RequestSecurityTokenResponse" type="wst:AbstractRequestSecurityTokenType"/>
<xs:complexType name="AbstractRequestSecurityTokenType">
<xs:sequence>
<xs:any namespace="##any" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Context" type="xs:anyURI" use="optional"/>
<xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
<xs:element name="RequestSecurityTokenCollection" type="wst:RequestSecurityTokenCollectionType"/>
<xs:complexType name="RequestSecurityTokenCollectionType">
<xs:sequence>
<xs:element name="RequestSecurityToken" type="wst:AbstractRequestSecurityTokenType" minOccurs="2" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:element name="RequestSecurityTokenResponseCollection" type="wst:RequestSecurityTokenResponseCollectionType"/>
<xs:complexType name="RequestSecurityTokenResponseCollectionType">
<xs:sequence>
<xs:element ref="wst:RequestSecurityTokenResponse" minOccurs="1" maxOccurs="unbounded"/>
</xs:sequence>
<xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
</xs:schema>
</wsdl:types>
<!-- WS-Trust defines the following GEDs -->
<wsdl:message name="RequestSecurityTokenMsg">
<wsdl:part name="request" element="wst:RequestSecurityToken"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenResponseMsg">
<wsdl:part name="response" element="wst:RequestSecurityTokenResponse"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenCollectionMsg">
<wsdl:part name="requestCollection" element="wst:RequestSecurityTokenCollection"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenResponseCollectionMsg">
<wsdl:part name="responseCollection" element="wst:RequestSecurityTokenResponseCollection"/>
</wsdl:message>
<!-- This portType an example of a Requestor (or other) endpoint that
Accepts SOAP-based challenges from a Security Token Service -->
<wsdl:portType name="WSSecurityRequestor">
<wsdl:operation name="Challenge">
<wsdl:input message="tns:RequestSecurityTokenResponseMsg"/>
<wsdl:output message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
</wsdl:portType>
<!-- This portType is an example of an STS supporting full protocol -->
<wsdl:portType name="STS">
<wsdl:operation name="Cancel">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Cancel" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/CancelFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="Issue">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal" message="tns:RequestSecurityTokenResponseCollectionMsg"/>
</wsdl:operation>
<wsdl:operation name="Renew">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Renew" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/RenewFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="Validate">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Validate" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/ValidateFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="KeyExchangeToken">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/KET" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/KETFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="RequestCollection">
<wsdl:input message="tns:RequestSecurityTokenCollectionMsg"/>
<wsdl:output message="tns:RequestSecurityTokenResponseCollectionMsg"/>
</wsdl:operation>
</wsdl:portType>
<!-- This portType is an example of an endpoint that accepts
Unsolicited RequestSecurityTokenResponse messages -->
<wsdl:portType name="SecurityTokenResponseService">
<wsdl:operation name="RequestSecurityTokenResponse">
<wsdl:input message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
</wsdl:portType>
<wsdl:binding name="STS_Binding" type="wstrust:STS">
<wsp:PolicyReference URI="#STS_policy"/>
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<wsdl:operation name="Issue">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Validate">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Validate"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Cancel">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Cancel"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Renew">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Renew"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="KeyExchangeToken">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/KeyExchangeToken"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="RequestCollection">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/RequestCollection"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
</wsdl:binding>
<wsp:Policy wsu:Id="STS_policy">
<wsp:ExactlyOne>
<wsp:All>
<wsap10:UsingAddressing/>
<wsp:ExactlyOne>
<sp:TransportBinding xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:TransportToken>
<wsp:Policy>
<sp:HttpsToken>
<wsp:Policy/>
</sp:HttpsToken>
</wsp:Policy>
</sp:TransportToken>
<sp:AlgorithmSuite>
<wsp:Policy>
<sp:Basic128/>
</wsp:Policy>
</sp:AlgorithmSuite>
<sp:Layout>
<wsp:Policy>
<sp:Lax/>
</wsp:Policy>
</sp:Layout>
<sp:IncludeTimestamp/>
</wsp:Policy>
</sp:TransportBinding>
</wsp:ExactlyOne>
<sp:Wss11 xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:MustSupportRefKeyIdentifier/>
<sp:MustSupportRefIssuerSerial/>
<sp:MustSupportRefThumbprint/>
<sp:MustSupportRefEncryptedKey/>
</wsp:Policy>
</sp:Wss11>
<sp:Trust13 xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:MustSupportIssuedTokens/>
<sp:RequireClientEntropy/>
<sp:RequireServerEntropy/>
</wsp:Policy>
</sp:Trust13>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="Input_policy">
<wsp:ExactlyOne>
<wsp:All>
<sp:SignedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
<sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="From" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="FaultTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="ReplyTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="MessageID" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="RelatesTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="Action" Namespace="http://www.w3.org/2005/08/addressing"/>
</sp:SignedParts>
<sp:EncryptedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
</sp:EncryptedParts>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="Output_policy">
<wsp:ExactlyOne>
<wsp:All>
<sp:SignedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
<sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="From" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="FaultTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="ReplyTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="MessageID" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="RelatesTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="Action" Namespace="http://www.w3.org/2005/08/addressing"/>
</sp:SignedParts>
<sp:EncryptedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
</sp:EncryptedParts>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsdl:service name="SecurityTokenService">
<wsdl:port name="STS_Port" binding="tns:STS_Binding">
<soap:address location="https://localhost:8993/services/SecurityTokenService"/>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
Example Requests and Responses for a SAML Assertion
A client performs a RequestSecurityToken operation against the STS to receive a SAML assertion. The DDF STS offers several different ways to request a SAML assertion; to help in understanding the various request and response formats, samples are provided below, divided by request token type.
Most endpoints used with DDF require the X.509 PublicKey SAML assertion.
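All of the samples below share the same basic WS-Trust shape: a SOAP envelope whose body carries a RequestSecurityToken with a RequestType, TokenType, KeyType, and AppliesTo address. The sketch below (not part of DDF; written against the Python standard library, with the namespace URIs taken from the samples in this section) shows how such an envelope could be assembled programmatically:

```python
# Sketch: build a minimal WS-Trust RequestSecurityToken (RST) envelope.
# Namespace URIs match the samples in this section; the STS address is
# the placeholder used throughout this documentation.
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WST = "http://docs.oasis-open.org/ws-sx/ws-trust/200512"
WSP = "http://schemas.xmlsoap.org/ws/2004/09/policy"
WSA = "http://www.w3.org/2005/08/addressing"

def build_rst_envelope(applies_to, token_type, key_type):
    """Return a serialized RST Issue request for the given endpoint."""
    env = ET.Element(f"{{{SOAP}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    rst = ET.SubElement(body, f"{{{WST}}}RequestSecurityToken")
    ET.SubElement(rst, f"{{{WST}}}RequestType").text = WST + "/Issue"
    ET.SubElement(rst, f"{{{WST}}}TokenType").text = token_type
    ET.SubElement(rst, f"{{{WST}}}KeyType").text = key_type
    applies = ET.SubElement(rst, f"{{{WSP}}}AppliesTo")
    epr = ET.SubElement(applies, f"{{{WSA}}}EndpointReference")
    ET.SubElement(epr, f"{{{WSA}}}Address").text = applies_to
    return ET.tostring(env, encoding="unicode")

envelope = build_rst_envelope(
    "https://server:8993/services/SecurityTokenService",
    "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0",
    WST + "/PublicKey",
)
```

A real request additionally needs the WS-Addressing headers and WS-Security header (Timestamp, and possibly a UsernameToken or BinarySecurityToken) shown in the full samples below, and must be sent over HTTPS.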
BinarySecurityToken (CAS) SAML Security Token Request/Response
BinarySecurityToken (CAS) Sample Request/Response
Request
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:60652909-faca-4e4a-a4a7-8a5ce243a7cb</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">https://server:8993/services/SecurityTokenService</To>
<ReplyTo xmlns="http://www.w3.org/2005/08/addressing">
<Address>http://www.w3.org/2005/08/addressing/anonymous</Address>
</ReplyTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-1">
<wsu:Created>2013-04-29T18:35:10.688Z</wsu:Created>
<wsu:Expires>2013-04-29T18:40:10.688Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/SecurityTokenService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
</wst:Claims>
<wst:OnBehalfOf>
<BinarySecurityToken ValueType="#CAS" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ns1:Id="CAS" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns1="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">U1QtMTQtYUtmcDYxcFRtS0FxZG1pVDMzOWMtY2FzfGh0dHBzOi8vdG9rZW5pc3N1ZXI6ODk5My9zZXJ2aWNlcy9TZWN1cml0eVRva2VuU2VydmljZQ==</BinarySecurityToken>
</wst:OnBehalfOf>
<wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
<wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</wst:KeyType>
<wst:UseKey>
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>
MIIC5DCCAk2gAwIBAgIJAKj7ROPHjo1yMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJVUzEQ
MA4GA1UECAwHQXJpem9uYTERMA8GA1UEBwwIR29vZHllYXIxGDAWBgNVBAoMD0xvY2toZWVkIE1h
cnRpbjENMAsGA1UECwwESTRDRTEPMA0GA1UEAwwGY2xpZW50MRwwGgYJKoZIhvcNAQkBFg1pNGNl
QGxtY28uY29tMB4XDTEyMDYyMDE5NDMwOVoXDTIyMDYxODE5NDMwOVowgYoxCzAJBgNVBAYTAlVT
MRAwDgYDVQQIDAdBcml6b25hMREwDwYDVQQHDAhHb29keWVhcjEYMBYGA1UECgwPTG9ja2hlZWQg
TWFydGluMQ0wCwYDVQQLDARJNENFMQ8wDQYDVQQDDAZjbGllbnQxHDAaBgkqhkiG9w0BCQEWDWk0
Y2VAbG1jby5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAIpHxCBLYE7xfDLcITS9SsPG
4Q04Z6S32/+TriGsRgpGTj/7GuMG7oJ98m6Ws5cTYl7nyunyHTkZuP7rBzy4esDIHheyx18EgdSJ
vvACgGVCnEmHndkf9bWUlAOfNaxW+vZwljUkRUVdkhPbPdPwOcMdKg/SsLSNjZfsQIjoWd4rAgMB
AAGjUDBOMB0GA1UdDgQWBBQx11VLtYXLvFGpFdHnhlNW9+lxBDAfBgNVHSMEGDAWgBQx11VLtYXL
vFGpFdHnhlNW9+lxBDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4GBAHYs2OI0K6yVXzyS
sKcv2fmfw6XCICGTnyA7BOdAjYoqq6wD+33dHJUCFDqye7AWdcivuc7RWJt9jnlfJZKIm2BHcDTR
Hhk6CvjJ14Gf40WQdeMHoX8U8b0diq7Iy5Ravx+zRg7SdiyJUqFYjRh/O5tywXRT1+freI3bwAN0
L6tQ
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</wst:UseKey>
<wst:Renewing/>
</wst:RequestSecurityToken>
</soap:Body>
</soap:Envelope>
Response
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:7a6fde04-9013-41ef-b08b-0689ffa9c93e</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:60652909-faca-4e4a-a4a7-8a5ce243a7cb</RelatesTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-2">
<wsu:Created>2013-04-29T18:35:11.459Z</wsu:Created>
<wsu:Expires>2013-04-29T18:40:11.459Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns2="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns4="http://www.w3.org/2005/08/addressing" xmlns:ns5="http://docs.oasis-open.org/ws-sx/ws-trust/200802">
<RequestSecurityTokenResponse>
<TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</TokenType>
<RequestedSecurityToken>
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="_BDC44EB8593F47D1B213672605113671" IssueInstant="2013-04-29T18:35:11.370Z" Version="2.0" xsi:type="saml2:AssertionType">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_BDC44EB8593F47D1B213672605113671">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xs"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>6wnWbft6Pz5XOF5Q9AG59gcGwLY=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>h+NvkgXGdQtca3/eKebhAKgG38tHp3i2n5uLLy8xXXIg02qyKgEP0FCowp2LiYlsQU9YjKfSwCUbH3WR6jhbAv9zj29CE+ePfEny7MeXvgNl3wId+vcHqti/DGGhhgtO2Mbx/tyX1BhHQUwKRlcHajxHeecwmvV7D85NMdV48tI=</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDmjCCAwOgAwIBAgIBBDANBgkqhkiG9w0BAQQFADB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMH
QXJpem9uYTERMA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4
YW1wbGUxEDAOBgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBMB4XDTEzMDQwOTE4MzcxMVoXDTIz
MDQwNzE4MzcxMVowgaYxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMREwDwYDVQQHEwhH
b29keWVhcjEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UECxMHRXhh
bXBsZTEUMBIGA1UEAxMLdG9rZW5pc3N1ZXIxJjAkBgkqhkiG9w0BCQEWF3Rva2VuaXNzdWVyQGV4
YW1wbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDDfktpA8Lrp9rTfRibKdgtxtN9
uB44diiIqq3JOzDGfDhGLu6mjpuHO1hrKItv42hBOhhmH7lS9ipiaQCIpVfgIG63MB7fa5dBrfGF
G69vFrU1Lfi7IvsVVsNrtAEQljOMmw9sxS3SUsRQX+bD8jq7Uj1hpoF7DdqpV8Kb0COOGwIDAQAB
o4IBBjCCAQIwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2Vy
dGlmaWNhdGUwHQYDVR0OBBYEFD1mHviop2Tc4HaNu8yPXR6GqWP1MIGnBgNVHSMEgZ8wgZyAFBcn
en6/j05DzaVwORwrteKc7TZOoXmkdzB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTER
MA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4YW1wbGUxEDAO
BgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBggkAwXk7OcwO7gwwDQYJKoZIhvcNAQEEBQADgYEA
PiTX5kYXwdhmijutSkrObKpRbQkvkkzcyZlO6VrAxRQ+eFeN6NyuyhgYy5K6l/sIWdaGou5iJOQx
2pQYWx1v8Klyl0W22IfEAXYv/epiO89hpdACryuDJpioXI/X8TAwvRwLKL21Dk3k2b+eyCgA0O++
HM0dPfiQLQ99ElWkv/0=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">srogers</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<saml2:SubjectConfirmationData xsi:type="saml2:KeyInfoConfirmationDataType">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIC5DCCAk2gAwIBAgIJAKj7ROPHjo1yMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJVUzEQ
MA4GA1UECAwHQXJpem9uYTERMA8GA1UEBwwIR29vZHllYXIxGDAWBgNVBAoMD0xvY2toZWVkIE1h
cnRpbjENMAsGA1UECwwESTRDRTEPMA0GA1UEAwwGY2xpZW50MRwwGgYJKoZIhvcNAQkBFg1pNGNl
QGxtY28uY29tMB4XDTEyMDYyMDE5NDMwOVoXDTIyMDYxODE5NDMwOVowgYoxCzAJBgNVBAYTAlVT
MRAwDgYDVQQIDAdBcml6b25hMREwDwYDVQQHDAhHb29keWVhcjEYMBYGA1UECgwPTG9ja2hlZWQg
TWFydGluMQ0wCwYDVQQLDARJNENFMQ8wDQYDVQQDDAZjbGllbnQxHDAaBgkqhkiG9w0BCQEWDWk0
Y2VAbG1jby5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAIpHxCBLYE7xfDLcITS9SsPG
4Q04Z6S32/+TriGsRgpGTj/7GuMG7oJ98m6Ws5cTYl7nyunyHTkZuP7rBzy4esDIHheyx18EgdSJ
vvACgGVCnEmHndkf9bWUlAOfNaxW+vZwljUkRUVdkhPbPdPwOcMdKg/SsLSNjZfsQIjoWd4rAgMB
AAGjUDBOMB0GA1UdDgQWBBQx11VLtYXLvFGpFdHnhlNW9+lxBDAfBgNVHSMEGDAWgBQx11VLtYXL
vFGpFdHnhlNW9+lxBDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4GBAHYs2OI0K6yVXzyS
sKcv2fmfw6XCICGTnyA7BOdAjYoqq6wD+33dHJUCFDqye7AWdcivuc7RWJt9jnlfJZKIm2BHcDTR
Hhk6CvjJ14Gf40WQdeMHoX8U8b0diq7Iy5Ravx+zRg7SdiyJUqFYjRh/O5tywXRT1+freI3bwAN0
L6tQ</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</saml2:SubjectConfirmationData>
</saml2:SubjectConfirmation>
</saml2:Subject>
<saml2:Conditions NotBefore="2013-04-29T18:35:11.407Z" NotOnOrAfter="2013-04-29T19:05:11.407Z">
<saml2:AudienceRestriction>
<saml2:Audience>https://server:8993/services/SecurityTokenService</saml2:Audience>
</saml2:AudienceRestriction>
</saml2:Conditions>
<saml2:AuthnStatement AuthnInstant="2013-04-29T18:35:11.392Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Steve Rogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</RequestedSecurityToken>
<RequestedAttachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_BDC44EB8593F47D1B213672605113671</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedAttachedReference>
<RequestedUnattachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_BDC44EB8593F47D1B213672605113671</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedUnattachedReference>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/SecurityTokenService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<Lifetime>
<ns2:Created>2013-04-29T18:35:11.444Z</ns2:Created>
<ns2:Expires>2013-04-29T19:05:11.444Z</ns2:Expires>
</Lifetime>
</RequestSecurityTokenResponse>
</RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
UsernameToken Bearer SAML Security Token Request/Response
To obtain a SAML assertion to use in secure communication to DDF, a RequestSecurityToken (RST) request must be made to the STS.
A Bearer SAML assertion is automatically trusted by the endpoint; the client does not have to prove that it rightfully holds the assertion. This is the simplest way to request a SAML assertion, but many endpoints will not accept a KeyType of Bearer.
Request
Explanation
- WS-Addressing header with Action, To, and MessageID
- Valid, non-expired timestamp
- UsernameToken containing a username and password that the STS will authenticate
- Issued over HTTPS
- KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer
- Claims (optional): Some endpoints may require that the SAML assertion include attributes of the user, such as an authenticated user’s role, name identifier, email address, etc. If the SAML assertion needs those attributes, the RequestSecurityToken must specify which ones to include.
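The Timestamp and UsernameToken requirements above map onto the WS-Security header of the request. As a rough sketch (a helper written for illustration, not DDF code; the credentials are the sample values used below), the header could be built like this:

```python
# Sketch: build the WS-Security header a Bearer RST requires — a
# non-expired Timestamp plus a UsernameToken with a plaintext password.
import datetime
import xml.etree.ElementTree as ET

WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")
WSU = ("http://docs.oasis-open.org/wss/2004/01/"
       "oasis-200401-wss-wssecurity-utility-1.0.xsd")
PASSWORD_TEXT = ("http://docs.oasis-open.org/wss/2004/01/"
                 "oasis-200401-wss-username-token-profile-1.0#PasswordText")

def stamp(dt):
    """Format a datetime the way the samples do: millisecond UTC + 'Z'."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"

def build_security_header(username, password, ttl_minutes=5):
    now = datetime.datetime.now(datetime.timezone.utc)
    sec = ET.Element(f"{{{WSSE}}}Security")
    ts = ET.SubElement(sec, f"{{{WSU}}}Timestamp")
    ET.SubElement(ts, f"{{{WSU}}}Created").text = stamp(now)
    expires = now + datetime.timedelta(minutes=ttl_minutes)
    ET.SubElement(ts, f"{{{WSU}}}Expires").text = stamp(expires)
    ut = ET.SubElement(sec, f"{{{WSSE}}}UsernameToken")
    ET.SubElement(ut, f"{{{WSSE}}}Username").text = username
    pw = ET.SubElement(ut, f"{{{WSSE}}}Password")
    pw.set("Type", PASSWORD_TEXT)
    pw.text = password
    return sec

header = build_security_header("srogers", "password1")
```

The Expires value gives the STS a window in which the request is valid; an expired timestamp is rejected.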
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-1">
<wsu:Created>2013-04-29T17:47:37.817Z</wsu:Created>
<wsu:Expires>2013-04-29T17:57:37.817Z</wsu:Expires>
</wsu:Timestamp>
<wsse:UsernameToken wsu:Id="UsernameToken-1">
<wsse:Username>srogers</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password1</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
<wsa:Action>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</wsa:Action>
<wsa:MessageID>uuid:a1bba87b-0f00-46cc-975f-001391658cbe</wsa:MessageID>
<wsa:To>https://server:8993/services/SecurityTokenService</wsa:To>
</soap:Header>
<soap:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:SecondaryParameters>
<t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
<t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
<t:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
<!--Add any additional claims you want to grab for the service-->
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/uid"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
</t:Claims>
</wst:SecondaryParameters>
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:Renewing/>
</wst:RequestSecurityToken>
</soap:Body>
</soap:Envelope>
Response
This is the response from the STS containing the SAML assertion to be used in subsequent requests to QCRUD endpoints:
- The saml2:Assertion block contains the entire SAML assertion.
- The Signature block contains a signature created with the STS’s private key. The endpoint receiving the SAML assertion verifies that it trusts the signer and that the message was not tampered with.
- The AttributeStatement block contains all of the requested Claims.
- The Lifetime block indicates the time interval during which the SAML assertion is valid.
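A client consuming this response needs to locate the saml2:Assertion inside the RequestSecurityTokenResponseCollection. The sketch below (illustrative only; the XML is a trimmed stand-in for the full response, with element and namespace names matching the sample) shows one way to pull out the assertion ID and the role claims:

```python
# Sketch: extract the SAML assertion ID and role attribute values from
# an STS response. The response here is a cut-down stand-in for the
# full sample; namespaces and element names are the same.
import xml.etree.ElementTree as ET

SAML2 = "urn:oasis:names:tc:SAML:2.0:assertion"
ROLE_CLAIM = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"

response = """
<RequestSecurityTokenResponseCollection
    xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <RequestSecurityTokenResponse>
    <RequestedSecurityToken>
      <saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
          ID="_7437C1A55F19AFF22113672577526132">
        <saml2:AttributeStatement>
          <saml2:Attribute
              Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role">
            <saml2:AttributeValue>avengers</saml2:AttributeValue>
          </saml2:Attribute>
        </saml2:AttributeStatement>
      </saml2:Assertion>
    </RequestedSecurityToken>
  </RequestSecurityTokenResponse>
</RequestSecurityTokenResponseCollection>
"""

root = ET.fromstring(response)
assertion = root.find(f".//{{{SAML2}}}Assertion")
assertion_id = assertion.get("ID")
roles = [v.text for v in assertion.findall(
    f".//{{{SAML2}}}Attribute[@Name='{ROLE_CLAIM}']"
    f"/{{{SAML2}}}AttributeValue")]
```

The extracted assertion (the whole saml2:Assertion element, signature included) is what gets placed in the security header of subsequent requests; the ID matches the KeyIdentifier in the RequestedAttachedReference and RequestedUnattachedReference blocks.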
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:eee4c6ef-ac10-4cbc-a53c-13d960e3b6e8</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">uuid:a1bba87b-0f00-46cc-975f-001391658cbe</RelatesTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-2">
<wsu:Created>2013-04-29T17:49:12.624Z</wsu:Created>
<wsu:Expires>2013-04-29T17:54:12.624Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns2="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns4="http://www.w3.org/2005/08/addressing" xmlns:ns5="http://docs.oasis-open.org/ws-sx/ws-trust/200802">
<RequestSecurityTokenResponse>
<TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</TokenType>
<RequestedSecurityToken>
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="_7437C1A55F19AFF22113672577526132" IssueInstant="2013-04-29T17:49:12.613Z" Version="2.0" xsi:type="saml2:AssertionType">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_7437C1A55F19AFF22113672577526132">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xs"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>ReOqEbGZlyplW5kqiynXOjPnVEA=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>X5Kzd54PrKIlGVV2XxzCmWFRzHRoybF7hU6zxbEhSLMR0AWS9R7Me3epq91XqeOwvIDDbwmE/oJNC7vI0fIw/rqXkx4aZsY5a5nbAs7f+aXF9TGdk82x2eNhNGYpViq0YZJfsJ5WSyMtG8w5nRekmDMy9oTLsHG+Y/OhJDEwq58=</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDmjCCAwOgAwIBAgIBBDANBgkqhkiG9w0BAQQFADB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMH
QXJpem9uYTERMA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4
YW1wbGUxEDAOBgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBMB4XDTEzMDQwOTE4MzcxMVoXDTIz
MDQwNzE4MzcxMVowgaYxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMREwDwYDVQQHEwhH
b29keWVhcjEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UECxMHRXhh
bXBsZTEUMBIGA1UEAxMLdG9rZW5pc3N1ZXIxJjAkBgkqhkiG9w0BCQEWF3Rva2VuaXNzdWVyQGV4
YW1wbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDDfktpA8Lrp9rTfRibKdgtxtN9
uB44diiIqq3JOzDGfDhGLu6mjpuHO1hrKItv42hBOhhmH7lS9ipiaQCIpVfgIG63MB7fa5dBrfGF
G69vFrU1Lfi7IvsVVsNrtAEQljOMmw9sxS3SUsRQX+bD8jq7Uj1hpoF7DdqpV8Kb0COOGwIDAQAB
o4IBBjCCAQIwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2Vy
dGlmaWNhdGUwHQYDVR0OBBYEFD1mHviop2Tc4HaNu8yPXR6GqWP1MIGnBgNVHSMEgZ8wgZyAFBcn
en6/j05DzaVwORwrteKc7TZOoXmkdzB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTER
MA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4YW1wbGUxEDAO
BgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBggkAwXk7OcwO7gwwDQYJKoZIhvcNAQEEBQADgYEA
PiTX5kYXwdhmijutSkrObKpRbQkvkkzcyZlO6VrAxRQ+eFeN6NyuyhgYy5K6l/sIWdaGou5iJOQx
2pQYWx1v8Klyl0W22IfEAXYv/epiO89hpdACryuDJpioXI/X8TAwvRwLKL21Dk3k2b+eyCgA0O++
HM0dPfiQLQ99ElWkv/0=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">srogers</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"/>
</saml2:Subject>
<saml2:Conditions NotBefore="2013-04-29T17:49:12.614Z" NotOnOrAfter="2013-04-29T18:19:12.614Z">
<saml2:AudienceRestriction>
<saml2:Audience>https://server:8993/services/QueryService</saml2:Audience>
</saml2:AudienceRestriction>
</saml2:Conditions>
<saml2:AuthnStatement AuthnInstant="2013-04-29T17:49:12.613Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Steve Rogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</RequestedSecurityToken>
<RequestedAttachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_7437C1A55F19AFF22113672577526132</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedAttachedReference>
<RequestedUnattachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_7437C1A55F19AFF22113672577526132</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedUnattachedReference>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<Lifetime>
<ns2:Created>2013-04-29T17:49:12.620Z</ns2:Created>
<ns2:Expires>2013-04-29T18:19:12.620Z</ns2:Expires>
</Lifetime>
</RequestSecurityTokenResponse>
</RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
X.509 PublicKey SAML Security Token Request/Response
To obtain a SAML assertion to use in secure communication to DDF, a RequestSecurityToken (RST) request must be made to the STS.
An endpoint’s policy specifies the type of security token needed. Most of the endpoints used with DDF require a SAML v2.0 assertion with a KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey. This means that the SAML assertion provided by the client to a DDF endpoint must contain a SubjectConfirmation block of type "holder-of-key" containing the client’s public key, which proves that the client rightfully holds the SAML assertion returned by the STS.
Request
Explanation
The STS that comes with DDF requires the following to be in the RequestSecurityToken request in order to issue a valid SAML assertion. See the request block below for an example of how these components should be populated.
- WS-Addressing header containing Action, To, and MessageID blocks
- Valid, non-expired timestamp
- Issued over HTTPS
- TokenType of http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
- KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey
- X.509 certificate as the Proof of Possession (POP). This needs to be the certificate of the client that will be both requesting the SAML assertion and using it to issue a query
- Claims (optional): Some endpoints may require that the SAML assertion include attributes of the user, such as an authenticated user’s role, name identifier, email address, etc. If the SAML assertion needs those attributes, the RequestSecurityToken must specify which ones to include.
  - UsernameToken: If Claims are required, the RequestSecurityToken security header must contain a UsernameToken element with a username and password.
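The Proof of Possession requirement above means the request carries the client’s certificate inside a wst:UseKey element, base64-encoded as in the sample below. A rough sketch of assembling that block (illustrative helper, not DDF code; the certificate bytes are a placeholder for real DER-encoded certificate data):

```python
# Sketch: build the wst:UseKey proof-of-possession block. The client's
# DER-encoded X.509 certificate is base64-encoded into
# ds:KeyInfo/ds:X509Data/ds:X509Certificate. Placeholder bytes stand in
# for a real certificate.
import base64
import xml.etree.ElementTree as ET

WST = "http://docs.oasis-open.org/ws-sx/ws-trust/200512"
DS = "http://www.w3.org/2000/09/xmldsig#"

def build_use_key(cert_der: bytes) -> ET.Element:
    use_key = ET.Element(f"{{{WST}}}UseKey")
    key_info = ET.SubElement(use_key, f"{{{DS}}}KeyInfo")
    x509_data = ET.SubElement(key_info, f"{{{DS}}}X509Data")
    cert = ET.SubElement(x509_data, f"{{{DS}}}X509Certificate")
    cert.text = base64.b64encode(cert_der).decode("ascii")
    return use_key

use_key = build_use_key(b"placeholder-DER-bytes")
```

The certificate placed here must be the same one the client later uses to prove holder-of-key possession when presenting the issued assertion to an endpoint.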
<soapenv:Envelope xmlns:ns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Action>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</wsa:Action>
<wsa:MessageID>uuid:527243af-94bd-4b5c-a1d8-024fd7e694c5</wsa:MessageID>
<wsa:To>https://server:8993/services/SecurityTokenService</wsa:To>
<wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsu:Timestamp wsu:Id="TS-17">
<wsu:Created>2014-02-19T17:30:40.771Z</wsu:Created>
<wsu:Expires>2014-02-19T19:10:40.771Z</wsu:Expires>
</wsu:Timestamp>
<!-- OPTIONAL: Only required if the endpoint that the SAML assertion will be sent to requires claims. -->
<wsse:UsernameToken wsu:Id="UsernameToken-16">
<wsse:Username>pparker</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password1</wsse:Password>
<wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">LCTD+5Y7hlWIP6SpsEg9XA==</wsse:Nonce>
<wsu:Created>2014-02-19T17:30:37.355Z</wsu:Created>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
<wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</wst:KeyType>
<!-- OPTIONAL: Only required if the endpoint that the SAML assertion will be sent to requires claims. -->
<wst:Claims Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity">
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
</wst:Claims>
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:UseKey>
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIFGDCCBACgAwIBAgICJe0wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjU0OVoXDTE2MDUwNzAwMjU0OVowaTELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxDzANBgNVBAMTBmNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAOq6L1/jjZ5cyhjhHEbOHr5WQpboKACYbrsn8lg85LGNoAfcwImr9KBmOxGb
ZCxHYIhkW7pJ+kppyH8DDMviIvvdkvrAIU0l8OBRn2wReCBGQ01Imdc3+WzFF2svW75d6wii2ZVd
eMvUO15p/pAD/sdIfXmAfyu8+tqtiO8KVZGkTnlg3AMzfeSrkci5UHMVWj0qUSuzLk9SAg/9STgb
Kf2xBpHUYecWFSB+dTpdZN2pC85tj9xIoWGh5dFWG1fPcYRgzGPxsybiGOylbJ7rHDJuL7IIIyx5
EnkCuxmQwoQ6XQAhiWRGyPlY08w1LZixI2v+Cv/ZjUfIHv49I9P4Mt8CAwEAAaOCAdUwggHRMB8G
A1UdIwQYMBaAFCMUNCBNXy43NZLBBlnDjDplNZJoMB0GA1UdDgQWBBRPGiX6zZzKTqQSx/tjg6hx
9opDoTAOBgNVHQ8BAf8EBAMCBaAwgdoGA1UdHwSB0jCBzzA2oDSgMoYwaHR0cDovL2NybC5nZHMu
bml0LmRpc2EubWlsL2NybC9ET0RKSVRDQ0FfMjcuY3JsMIGUoIGRoIGOhoGLbGRhcDovL2NybC5n
ZHMubml0LmRpc2EubWlsL2NuJTNkRE9EJTIwSklUQyUyMENBLTI3JTJjb3UlM2RQS0klMmNvdSUz
ZERvRCUyY28lM2RVLlMuJTIwR292ZXJubWVudCUyY2MlM2RVUz9jZXJ0aWZpY2F0ZXJldm9jYXRp
b25saXN0O2JpbmFyeTAjBgNVHSAEHDAaMAsGCWCGSAFlAgELBTALBglghkgBZQIBCxIwfQYIKwYB
BQUHAQEEcTBvMD0GCCsGAQUFBzAChjFodHRwOi8vY3JsLmdkcy5uaXQuZGlzYS5taWwvc2lnbi9E
T0RKSVRDQ0FfMjcuY2VyMC4GCCsGAQUFBzABhiJodHRwOi8vb2NzcC5uc24wLnJjdnMubml0LmRp
c2EubWlsMA0GCSqGSIb3DQEBBQUAA4IBAQCGUJPGh4iGCbr2xCMqCq04SFQ+iaLmTIFAxZPFvup1
4E9Ir6CSDalpF9eBx9fS+Z2xuesKyM/g3YqWU1LtfWGRRIxzEujaC4YpwHuffkx9QqkwSkXXIsim
EhmzSgzxnT4Q9X8WwalqVYOfNZ6sSLZ8qPPFrLHkkw/zIFRzo62wXLu0tfcpOr+iaJBhyDRinIHr
hwtE3xo6qQRRWlO3/clC4RnTev1crFVJQVBF3yfpRu8udJ2SOGdqU0vjUSu1h7aMkYJMHIu08Whj
8KASjJBFeHPirMV1oddJ5ydZCQ+Jmnpbwq+XsCxg1LjC4dmbjKVr9s4QK+/JLNjxD8IkJiZE</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</wst:UseKey>
</wst:RequestSecurityToken>
</soapenv:Body>
</soapenv:Envelope>
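The required components listed above can be sanity-checked before the request is sent. The following is a minimal sketch (not part of DDF) using Python's standard xml.etree to confirm an RST envelope carries the WS-Addressing headers and the TokenType, KeyType, and RequestType values the STS expects:

```python
# Sketch: verify an RST request contains the components the DDF STS requires.
# The namespaces and URIs below are taken from the request example above.
import xml.etree.ElementTree as ET

NS = {
    "soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
    "wsa": "http://www.w3.org/2005/08/addressing",
    "wst": "http://docs.oasis-open.org/ws-sx/ws-trust/200512",
}

REQUIRED = {
    "TokenType": "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0",
    "KeyType": "http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey",
    "RequestType": "http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue",
}

def check_rst(xml_text: str) -> list:
    """Return a list of problems found in an RST envelope (empty = looks valid)."""
    root = ET.fromstring(xml_text)
    problems = []
    # WS-Addressing header blocks must all be present
    for block in ("Action", "To", "MessageID"):
        if root.find(f"soapenv:Header/wsa:{block}", NS) is None:
            problems.append(f"missing wsa:{block} header")
    rst = root.find("soapenv:Body/wst:RequestSecurityToken", NS)
    if rst is None:
        return problems + ["missing wst:RequestSecurityToken body"]
    # TokenType, KeyType, and RequestType must carry the exact URIs above
    for tag, uri in REQUIRED.items():
        el = rst.find(f"wst:{tag}", NS)
        if el is None or el.text != uri:
            problems.append(f"wst:{tag} missing or not {uri}")
    return problems
```

Running check_rst against a request shaped like the example above returns an empty list; a request with a missing or wrong KeyType reports the offending block instead.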
Response
Explanation
This is the response from the STS containing the SAML assertion to be used in subsequent requests to QCRUD endpoints.
The saml2:Assertion block contains the entire SAML assertion.
The Signature block contains a signature from the STS’s private key. The endpoint receiving the SAML assertion will verify that it trusts the signer and ensure that the message wasn’t tampered with.
The SubjectConfirmation block contains the client’s public key, so the server can verify that the client has permission to hold this SAML assertion.
The AttributeStatement block contains all of the claims requested.
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:b46c35ad-3120-4233-ae07-b9e10c7911f3</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">uuid:527243af-94bd-4b5c-a1d8-024fd7e694c5</RelatesTo>
<wsse:Security soap:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsu:Timestamp wsu:Id="TS-90DBA0754E55B4FE7013928310431357">
<wsu:Created>2014-02-19T17:30:43.135Z</wsu:Created>
<wsu:Expires>2014-02-19T17:35:43.135Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<ns2:RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200802" xmlns:ns2="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns4="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns5="http://www.w3.org/2005/08/addressing">
<ns2:RequestSecurityTokenResponse>
<ns2:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</ns2:TokenType>
<ns2:RequestedSecurityToken>
<saml2:Assertion ID="_90DBA0754E55B4FE7013928310431176" IssueInstant="2014-02-19T17:30:43.117Z" Version="2.0" xsi:type="saml2:AssertionType" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_90DBA0754E55B4FE7013928310431176">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces PrefixList="xs" xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>/bEGqsRGHVJbx298WPmGd8I53zs=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>
mYR7w1/dnuh8Z7t9xjCb4XkYQLshj+UuYlGOuTwDYsUPcS2qI0nAgMD1VsDP7y1fDJxeqsq7HYhFKsnqRfebMM4WLH1D/lJ4rD4UO+i9l3tuiHml7SN24WM1/bOqfDUCoDqmwG8afUJ3r4vmTNPftwOss8BZ/8ODgZzm08ndlkxDfvcN7OrExbV/3/45JwF/MMPZoqvi2MJGfX56E9fErJNuzezpWnRqPOlWPxyffKMAlVaB9zF6gvVnUqcW2k/Z8X9lN7O5jouBI281ZnIfsIPuBJERFtYNVDHsIXM1pJnrY6FlKIaOsi55LQu3Ruir/n82pU7BT5aWtxwrn7akBg== </ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIFHTCCBAWgAwIBAgICJe8wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjYzN1oXDTE2MDUwNzAwMjYzN1owbjELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxFDASBgNVBAMTC3Rva2VuaXNzdWVyMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAx01/U4M1wG+wL1JxX2RL1glj101FkJXMk3KFt3zD//N8x/Dcwwvs
ngCQjXrV6YhbB2V7scHwnThPv3RSwYYiO62z+g6ptfBbKGGBLSZOzLe3fyJR4RxblFKsELFgPHfX
vgUHS/keG5uSRk9S/Okqps/yxKB7+ZlxeFxsIz5QywXvBpMiXtc2zF+M7BsbSIdSx5LcPcDFBwjF
c66rE3/y/25VMht9EZX1QoKr7f8rWD4xgd5J6DYMFWEcmiCz4BDJH9sftw+n1P+CYgrhwslWGqxt
cDME9t6SWR3GLT4Sdtr8ziIM5uUteEhPIV3rVC3/u23JbYEeS8mpnp0bxt5eHQIDAQABo4IB1TCC
AdEwHwYDVR0jBBgwFoAUIxQ0IE1fLjc1ksEGWcOMOmU1kmgwHQYDVR0OBBYEFGBjdkdey+bMHMhC
Z7gwiQ/mJf5VMA4GA1UdDwEB/wQEAwIFoDCB2gYDVR0fBIHSMIHPMDagNKAyhjBodHRwOi8vY3Js
Lmdkcy5uaXQuZGlzYS5taWwvY3JsL0RPREpJVENDQV8yNy5jcmwwgZSggZGggY6GgYtsZGFwOi8v
Y3JsLmdkcy5uaXQuZGlzYS5taWwvY24lM2RET0QlMjBKSVRDJTIwQ0EtMjclMmNvdSUzZFBLSSUy
Y291JTNkRG9EJTJjbyUzZFUuUy4lMjBHb3Zlcm5tZW50JTJjYyUzZFVTP2NlcnRpZmljYXRlcmV2
b2NhdGlvbmxpc3Q7YmluYXJ5MCMGA1UdIAQcMBowCwYJYIZIAWUCAQsFMAsGCWCGSAFlAgELEjB9
BggrBgEFBQcBAQRxMG8wPQYIKwYBBQUHMAKGMWh0dHA6Ly9jcmwuZ2RzLm5pdC5kaXNhLm1pbC9z
aWduL0RPREpJVENDQV8yNy5jZXIwLgYIKwYBBQUHMAGGImh0dHA6Ly9vY3NwLm5zbjAucmN2cy5u
aXQuZGlzYS5taWwwDQYJKoZIhvcNAQEFBQADggEBAIHZQTINU3bMpJ/PkwTYLWPmwCqAYgEUzSYx
bNcVY5MWD8b4XCdw5nM3GnFlOqr4IrHeyyOzsEbIebTe3bv0l1pHx0Uyj059nAhx/AP8DjVtuRU1
/Mp4b6uJ/4yaoMjIGceqBzHqhHIJinG0Y2azua7eM9hVbWZsa912ihbiupCq22mYuHFP7NUNzBvV
j03YUcsy/sES5sRx9Rops/CBN+LUUYOdJOxYWxo8oAbtF8ABE5ATLAwqz4ttsToKPUYh1sxdx5Ef
APeZ+wYDmMu4OfLckwnCKZgkEtJOxXpdIJHY+VmyZtQSB0LkR5toeH/ANV4259Ia5ZT8h2/vIJBg
6B4=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">pparker</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<saml2:SubjectConfirmationData xsi:type="saml2:KeyInfoConfirmationDataType">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIFGDCCBACgAwIBAgICJe0wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjU0OVoXDTE2MDUwNzAwMjU0OVowaTELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxDzANBgNVBAMTBmNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAOq6L1/jjZ5cyhjhHEbOHr5WQpboKACYbrsn8lg85LGNoAfcwImr9KBmOxGb
ZCxHYIhkW7pJ+kppyH8DDMviIvvdkvrAIU0l8OBRn2wReCBGQ01Imdc3+WzFF2svW75d6wii2ZVd
eMvUO15p/pAD/sdIfXmAfyu8+tqtiO8KVZGkTnlg3AMzfeSrkci5UHMVWj0qUSuzLk9SAg/9STgb
Kf2xBpHUYecWFSB+dTpdZN2pC85tj9xIoWGh5dFWG1fPcYRgzGPxsybiGOylbJ7rHDJuL7IIIyx5
EnkCuxmQwoQ6XQAhiWRGyPlY08w1LZixI2v+Cv/ZjUfIHv49I9P4Mt8CAwEAAaOCAdUwggHRMB8G
A1UdIwQYMBaAFCMUNCBNXy43NZLBBlnDjDplNZJoMB0GA1UdDgQWBBRPGiX6zZzKTqQSx/tjg6hx
9opDoTAOBgNVHQ8BAf8EBAMCBaAwgdoGA1UdHwSB0jCBzzA2oDSgMoYwaHR0cDovL2NybC5nZHMu
bml0LmRpc2EubWlsL2NybC9ET0RKSVRDQ0FfMjcuY3JsMIGUoIGRoIGOhoGLbGRhcDovL2NybC5n
ZHMubml0LmRpc2EubWlsL2NuJTNkRE9EJTIwSklUQyUyMENBLTI3JTJjb3UlM2RQS0klMmNvdSUz
ZERvRCUyY28lM2RVLlMuJTIwR292ZXJubWVudCUyY2MlM2RVUz9jZXJ0aWZpY2F0ZXJldm9jYXRp
b25saXN0O2JpbmFyeTAjBgNVHSAEHDAaMAsGCWCGSAFlAgELBTALBglghkgBZQIBCxIwfQYIKwYB
BQUHAQEEcTBvMD0GCCsGAQUFBzAChjFodHRwOi8vY3JsLmdkcy5uaXQuZGlzYS5taWwvc2lnbi9E
T0RKSVRDQ0FfMjcuY2VyMC4GCCsGAQUFBzABhiJodHRwOi8vb2NzcC5uc24wLnJjdnMubml0LmRp
c2EubWlsMA0GCSqGSIb3DQEBBQUAA4IBAQCGUJPGh4iGCbr2xCMqCq04SFQ+iaLmTIFAxZPFvup1
4E9Ir6CSDalpF9eBx9fS+Z2xuesKyM/g3YqWU1LtfWGRRIxzEujaC4YpwHuffkx9QqkwSkXXIsim
EhmzSgzxnT4Q9X8WwalqVYOfNZ6sSLZ8qPPFrLHkkw/zIFRzo62wXLu0tfcpOr+iaJBhyDRinIHr
hwtE3xo6qQRRWlO3/clC4RnTev1crFVJQVBF3yfpRu8udJ2SOGdqU0vjUSu1h7aMkYJMHIu08Whj
8KASjJBFeHPirMV1oddJ5ydZCQ+Jmnpbwq+XsCxg1LjC4dmbjKVr9s4QK+/JLNjxD8IkJiZE</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</saml2:SubjectConfirmationData>
</saml2:SubjectConfirmation>
</saml2:Subject>
<saml2:Conditions NotBefore="2014-02-19T17:30:43.119Z" NotOnOrAfter="2014-02-19T18:00:43.119Z"/>
<saml2:AuthnStatement AuthnInstant="2014-02-19T17:30:43.117Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<!-- This block will only be included if Claims were requested in the RST. -->
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Peter Parker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">users</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">users</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</ns2:RequestedSecurityToken>
<ns2:RequestedAttachedReference>
<ns4:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<ns4:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_90DBA0754E55B4FE7013928310431176</ns4:KeyIdentifier>
</ns4:SecurityTokenReference>
</ns2:RequestedAttachedReference>
<ns2:RequestedUnattachedReference>
<ns4:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<ns4:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_90DBA0754E55B4FE7013928310431176</ns4:KeyIdentifier>
</ns4:SecurityTokenReference>
</ns2:RequestedUnattachedReference>
<ns2:Lifetime>
<ns3:Created>2014-02-19T17:30:43.119Z</ns3:Created>
<ns3:Expires>2014-02-19T18:00:43.119Z</ns3:Expires>
</ns2:Lifetime>
</ns2:RequestSecurityTokenResponse>
</ns2:RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
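A client typically extracts the saml2:Assertion block from this response and caches it until its Conditions window expires. The following is a minimal client-side sketch (the helper name is hypothetical, not a DDF API) using Python's standard library:

```python
# Sketch: pull the saml2:Assertion out of an STS response and read its
# validity window, as a client would before reusing the cached token.
import xml.etree.ElementTree as ET
from datetime import datetime

SAML2 = "{urn:oasis:names:tc:SAML:2.0:assertion}"

def extract_assertion(rstr_xml: str):
    """Return (assertion element, NotBefore, NotOnOrAfter) from an RSTR."""
    root = ET.fromstring(rstr_xml)
    assertion = root.find(f".//{SAML2}Assertion")
    if assertion is None:
        raise ValueError("no saml2:Assertion in response")
    cond = assertion.find(f"{SAML2}Conditions")
    # Timestamps follow the format seen in the response above,
    # e.g. 2014-02-19T17:30:43.119Z
    parse = lambda s: datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")
    return assertion, parse(cond.get("NotBefore")), parse(cond.get("NotOnOrAfter"))
```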
XACML Policy Decision Point (PDP)
After unzipping the DDF distribution, place the desired XACML policy in the <distribution root>/etc/pdp/policies directory. The PDP polls this directory for XACML policies every 60 seconds. A sample XACML policy is located at the end of this page. Information on specific bundle configurations and names can be found on the Security PDP application page.
Creating a Policy
This document assumes familiarity with the XACML schema and does not go into detail on the XACML language. However, there are some DDF-specific items to consider when creating a policy so that it is compatible with the XACMLRealm. When creating a policy, a target is used to indicate that a certain action should be run only for one type of request. Targets can be used on both the main policy element and any individual rules. Generally, targets are geared toward the actions that are set in the request.
Actions
For DDF, these actions are populated by various components in the security API. The actions and their population location are identified in the following table.
Operation | Action-id Value | Component Setting the Action | Description
Filtering / Redaction | read | security-pdp-xacmlrealm | When performing any redaction or filtering, the XACMLRealm sets the action-id to "read".
Service | <SOAPAction> | security-pep-interceptor | If the PEP Interceptor is added to any SOAP-based web services for service authorization, the action-id will be the SOAPAction of the incoming request. This allows the XACML policy to have specific rules for individual services within the system.
|
These are only the action-id values that are currently created by the components that come with DDF. Additional components can be created and added to DDF to identify specific action-ids. |
In the examples below, the policy specifies targets for the call types described above. For the Filtering/Redaction code, the target was set for "filter", and the Service validation code targets two services: query and LocalSiteName. In a production environment, the actions for service authorization will generally be full URNs described within the SOAP WSDL.
Attributes
Attributes for the XACML request are populated with the information in the calling subject and the resource being checked.
Subject
The attributes for the subject are obtained from the SAML claims and populated within the XACMLRealm as individual attributes under the urn:oasis:names:tc:xacml:1.0:subject-category:access-subject category. The name of the claim is used for the AttributeId value. Examples of the items being populated are available at the end of this page.
Resource
The attributes for resources are obtained through the permissions process. When checking permissions, the XACMLRealm retrieves a list of permissions that should be checked against the subject. These permissions are populated outside of the realm and should be populated with the security attributes located in the metacard security property. When the permissions are of a key-value type, the key being used is populated as the AttributeId value under the urn:oasis:names:tc:xacml:3.0:attribute-category:resource category.
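For illustration only (the function name and data shapes here are hypothetical, not a DDF API), that mapping can be sketched as flattening key-value permissions into resource-category attributes, with each permission key becoming the AttributeId:

```python
# Hypothetical sketch: flatten key-value permissions (e.g. from a metacard's
# security property) into XACML resource-category attributes.
RESOURCE_CATEGORY = "urn:oasis:names:tc:xacml:3.0:attribute-category:resource"

def permissions_to_attributes(permissions: dict) -> list:
    """Map {key: [values]} permissions to XACML resource attribute dicts."""
    attrs = []
    for key, values in permissions.items():
        for value in values:
            attrs.append({
                "Category": RESOURCE_CATEGORY,
                "AttributeId": key,  # the permission key becomes the AttributeId
                "Value": value,
            })
    return attrs
```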
Example Requests and Responses
The following items are a sample request, response, and the corresponding policy. For the XACML PDP, the request is made by the XACML realm (security-pdp-xacmlrealm), passed to the XACML processing engine (security-pdp-xacmlprocessor), which reads the policy and outputs a response.
Policy
This is the sample policy used for the following sample request and responses. The policy handles three actions: filter, query, and LocalSiteName. The filter action compares the subject’s SUBJECT_ACCESS attributes to the metacard’s RESOURCE_ACCESS attributes. The query and LocalSiteName actions differ in that they are used to perform service authorization: a query may only be performed by a user associated with the country code ATA (Antarctica), while a LocalSiteName action can be performed by anyone.
<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="xpath-target-single-req" RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:permit-overrides" Version="1.0">
<PolicyDefaults>
<XPathVersion>http://www.w3.org/TR/1999/REC-xpath-19991116</XPathVersion>
</PolicyDefaults>
<Target>
<AnyOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">LocalSiteName</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
</AnyOf>
</Target>
<Rule Effect="Permit" RuleId="permit-filter">
<Target>
<AnyOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
</AnyOf>
</Target>
<Condition>
<Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-subset">
<AttributeDesignator AttributeId="RESOURCE_ACCESS" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
<AttributeDesignator AttributeId="SUBJECT_ACCESS" Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Apply>
</Condition>
</Rule>
<Rule Effect="Permit" RuleId="permit-action">
<Target>
<AnyOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
<AttributeDesignator AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
</Match>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
</Match>
</AllOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">LocalSiteName</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
</Match>
</AllOf>
</AnyOf>
</Target>
</Rule>
<Rule Effect="Deny" RuleId="deny-read"/>
</Policy>
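The permit-filter rule's string-subset condition reduces to a simple set check: permit when every RESOURCE_ACCESS value on the resource also appears among the subject's SUBJECT_ACCESS values. A sketch of that logic (illustrative only, not the XACML processing engine):

```python
# Sketch of the permit-filter rule above: Permit when the resource's
# RESOURCE_ACCESS values are a subset of the subject's SUBJECT_ACCESS values.
def filter_decision(subject_access, resource_access):
    return "Permit" if set(resource_access) <= set(subject_access) else "Deny"
```

This matches the Metacard Authorization examples: a resource marked {A} against a subject holding {A, B} is permitted, while a resource carrying a value the subject lacks is denied.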
Service Authorization
Allowed Query
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Permit</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
Denied Query
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User USA</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">USA</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Deny</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
Metacard Authorization
Subject Permitted
All of the resource’s RESOURCE_ACCESS attributes were matched with the Subject’s SUBJECT_ACCESS attributes.
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Permit</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
Subject Denied
The resource had an additional RESOURCE_ACCESS attribute 'C' that the subject did not have.
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">C</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Deny</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
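The two metacard authorization examples above reduce to a simple set comparison. The following is an illustrative sketch only (the class and method names are hypothetical, and the real decision is made by the XACML policy evaluation): every RESOURCE_ACCESS value on the metacard must be matched by one of the subject's SUBJECT_ACCESS values, otherwise the decision is Deny.

```java
import java.util.Set;

// Hypothetical sketch of the access check the examples above illustrate;
// this is NOT the actual XACML PDP evaluation, just the effective rule.
public class MetacardAccessSketch {

    static String decide(Set<String> subjectAccess, Set<String> resourceAccess) {
        // Permit only when the subject's attributes cover all of the resource's
        return subjectAccess.containsAll(resourceAccess) ? "Permit" : "Deny";
    }

    public static void main(String[] args) {
        Set<String> subjectAccess = Set.of("A", "B"); // from the SUBJECT_ACCESS claims

        // Subject Permitted: the resource only requires "A"
        System.out.println(decide(subjectAccess, Set.of("A")));

        // Subject Denied: the resource additionally requires "C"
        System.out.println(decide(subjectAccess, Set.of("A", "B", "C")));
    }
}
```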
Expansion Service
The Expansion Service and its corresponding expansion-related commands provide an easy way for developers to add expansion capabilities to DDF for user attribute and metacard processing. In addition to these two defined uses of the expansion service, developers are free to utilize the service in their own implementations.
Each instance of the expansion service consists of a collection of rule sets. Each rule set consists of a key and its associated set of rules. Callers of the expansion service provide a key and an original value to be expanded. The expansion service looks up the set of rules for the specified key and cumulatively applies each rule in the set, starting with the original value; the resulting set of values is returned to the caller.
| Key (Attribute) | Rules (original → new) | |
|---|---|---|
| key1 | value1 | replacement1 |
| | value2 | replacement2 |
| | value3 | replacement3 |
| key2 | value1 | replacement1 |
| | value2 | replacement2 |
The examples below use the following collection of rule sets:
| Key (Attribute) | Rules (original → new) | |
|---|---|---|
| Location | Goodyear | Goodyear AZ |
| | AZ | AZ USA |
| | CA | CA USA |
| Title | VP-Sales | VP-Sales VP Sales |
| | VP-Engineering | VP-Engineering VP Engineering |
Note that the rules listed for each key are processed in order, so they may build upon each other, i.e., a new value from the new replacement string may be expanded by a subsequent rule.
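The cumulative rule application described above can be sketched in a few lines. This is an illustrative, hypothetical implementation (the class and method names are not the DDF API) assuming a space separator and exact-match rule originals:

```java
import java.util.*;

// Hypothetical sketch of the cumulative expansion algorithm described above;
// class and method names are illustrative, not the DDF implementation.
public class ExpansionSketch {
    // key -> ordered list of {original, replacement} rules
    private final Map<String, List<String[]>> ruleSets = new LinkedHashMap<>();
    private final String separator = " ";

    public void addRule(String key, String original, String replacement) {
        ruleSets.computeIfAbsent(key, k -> new ArrayList<>())
                .add(new String[]{original, replacement});
    }

    // Apply each rule in order to every value accumulated so far, splitting
    // expanded strings on the separator (duplicates are suppressed by the set).
    public Set<String> expand(String key, String value) {
        Set<String> values = new LinkedHashSet<>();
        Collections.addAll(values, value.split(separator));
        for (String[] rule : ruleSets.getOrDefault(key, Collections.emptyList())) {
            Set<String> next = new LinkedHashSet<>();
            for (String v : values) {
                String replaced = v.equals(rule[0]) ? rule[1] : v;
                Collections.addAll(next, replaced.split(separator));
            }
            values = next;
        }
        return values;
    }

    public static void main(String[] args) {
        ExpansionSketch svc = new ExpansionSketch();
        svc.addRule("Location", "Goodyear", "Goodyear AZ");
        svc.addRule("Location", "AZ", "AZ USA");
        svc.addRule("Location", "CA", "CA USA");
        // "Goodyear" -> "Goodyear AZ" -> AZ further expands to "AZ USA",
        // yielding the set {Goodyear, AZ, USA}
        System.out.println(svc.expand("Location", "Goodyear"));
    }
}
```

Note how the second rule builds on the output of the first, which is the behavior the note above describes.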
Instances and Configuration
It is expected that multiple instances of the expansion service will be running at the same time. Each instance of the service defines a unique property that is useful for retrieving specific instances of the expansion service. The following table lists the two pre-defined instances used by DDF for expanding user attributes and metacard attributes respectively.
| Property Name | Value | Description |
|---|---|---|
| mapping | security.user.attribute.mapping | This instance is configured with rules that expand the user’s attribute values for security checking. |
| mapping | security.metacard.attribute.mapping | This instance is configured with rules that expand the metacard’s security attributes before comparing them with the user’s attributes. |
Each instance of the expansion service can be configured using a configuration file. The configuration file can have three different types of lines:
* comments - any line prefixed with the # character is ignored as a comment (for readability, blank lines are also ignored)
* attribute separator - a line starting with separator= defines the attribute separator string.
* rule - all other lines are assumed to be rules defined in a string format <key>:<original value>:<new value>
The following configuration file defines the rules shown above in the example table (using the space as a separator):
# This defines the separator that will be used when the expansion string contains multiple
# values - each will be separated by this string. The expanded string will be split at the
# separator string and each resulting attribute added to the attribute set (duplicates are
# suppressed). No value indicates the default value of ' ' (space).
separator=

# The following rules define the attribute expansion to be performed. The rules are of the
# form:
#       <attribute name>:<original value>:<expanded value>
# The rules are ordered, so replacements from the first rules may be found in the original
# values of subsequent rules.
Location:Goodyear:Goodyear AZ
Location:AZ:AZ USA
Location:CA:CA USA
Title:VP-Sales:VP-Sales VP Sales
Title:VP-Engineering:VP-Engineering VP Engineering
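Reading a file in this format amounts to dispatching on the three line types described above. The following is a hypothetical parser sketch (the class name and field layout are illustrative, not the DDF implementation):

```java
import java.util.*;

// Hypothetical parser for the configuration-file format described above:
// comments (#), a separator= line, and <key>:<original>:<new> rules.
public class ExpansionConfigParser {
    public String separator = " ";                 // default when separator= is empty
    public final Map<String, List<String[]>> ruleSets = new LinkedHashMap<>();

    public void parse(List<String> lines) {
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) {
                continue;                          // blank line or comment: ignored
            }
            if (trimmed.startsWith("separator=")) {
                String value = trimmed.substring("separator=".length());
                separator = value.isEmpty() ? " " : value;
                continue;
            }
            // rule line: <attribute name>:<original value>:<expanded value>
            String[] parts = trimmed.split(":", 3);
            if (parts.length == 3) {
                ruleSets.computeIfAbsent(parts[0], k -> new ArrayList<>())
                        .add(new String[]{parts[1], parts[2]});
            }
        }
    }

    public static void main(String[] args) {
        ExpansionConfigParser p = new ExpansionConfigParser();
        p.parse(List.of("# comment", "separator=",
                        "Location:Goodyear:Goodyear AZ", "Location:AZ:AZ USA"));
        System.out.println(p.ruleSets.get("Location").size()); // two Location rules
    }
}
```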
Expansion Commands
| Title | Namespace | Description |
|---|---|---|
| DDF::Security::Expansion::Commands | security | The expansion commands provide detailed information about the expansion rules in place and the ability to see the results of expanding specific values against the active rule set. |

Expansion Commands
* security:expand
* security:expansions
Command Descriptions
| Command | Description |
|---|---|
| expand | Runs the expansion service on the provided data, returning the expanded value. |
| expansions | Dumps the ruleset for each active expansion service. |
Expansion Command Examples and Explanation
security:expansions
The security:expansions command dumps the ruleset for each active expansion service. It takes no arguments and displays each rule on a separate line in the form: <attribute name> : <original string> : <expanded string>. The following example shows the results of executing the expansions command with no active expansion service.
ddf@local>security:expansions
No expansion services currently available.
After installing the expansion service and configuring it with an appropriate set of rules, the expansions command will provide output similar to the following:
ddf@local>security:expansions
Location : Goodyear : Goodyear AZ
Location : AZ : AZ USA
Location : CA : CA USA
Title : VP-Sales : VP-Sales VP Sales
Title : VP-Engineering : VP-Engineering VP Engineering
security:expand
The security:expand command runs the expansion service on the provided data. It takes an attribute and an original value, expands the original value using the current expansion service and rule set and dumps the results. For the rule set shown above, the expand command produces the following results:
ddf@local>security:expand Location Goodyear
[Goodyear, USA, AZ]
ddf@local>security:expand Title VP-Engineering
[VP-Engineering, Engineering, VP]
ddf@local>expand Title "VP-Engineering Manager"
[VP-Engineering, Engineering, VP, Manager]
Configure WSS Using Standalone Servers
DDF uses CAS as its single sign-on service. DDF uses LDAP and STS to keep track of users and user attributes. CAS, LDAP, and STS are integral, interconnected components of the DDF security scheme, and all can be installed on a local DDF instance with only a few feature installs (with the exception of the CAS installation, which requires Apache Tomcat to run). Setting up these authentication components to run externally, however, is more nuanced, so this page will provide step-by-step instructions detailing the configuration process.
This page assumes that there is a keystore for each of the services/servers. If using different keystore names, substitute the name provided in this document with the desired name for your setup. For this document, the following data is used:
| Server | Keystore File | Comments |
|---|---|---|
| CAS | keystore.jks | Used on the CAS Tomcat server. |
| STS | stsKeystore.jks | Used to sign SAML assertions and for incoming connections. |
| DDF | serverKeystore.jks | Used by the server for incoming connections. |
See the Distributed WSS presentation (Distributed WSS.pptx).
Authentication Components
It is implied that the three authentication components identified below are installed on three separate servers. Therefore, it is important to keep track of the DNS hostnames used on each server for certificate authentication purposes.
LDAP
LDAP is used to maintain a list of trusted DDF users and the attributes associated with them. It interacts with both CAS and the STS. The former uses LDAP to create session information, and the latter queries LDAP for user attributes and converts them to SAML claims.
-
Obtain and unzip the DDF kernel (ddf-distribution-kernel-<VERSION>.zip).
-
Start the distribution.
-
Deploy the Embedded LDAP application by copying the ldap-embedded-app-<VERSION>.kar into the <DISTRIBUTION_HOME>/deploy directory. Verify that the LDAP server is installed by checking the DDF log or by performing an la command and verifying that the OpenDJ bundle is in the active state. Additionally, it should be responding to LDAP requests on the default ports, which are 1389 and 1636.
-
Copy the environment’s Java keystore file into the {DISTRIBUTION}/etc/keystores folder, making sure it overwrites the folder’s existing serverKeystore.jks file.
|
It is very important that the keystore file used in the process is set up to trust the hostnames used by CAS and STS. If it is not, there will be certificate authentication issues for the user. |
CAS
CAS is used for SSO authentication purposes. Unlike LDAP and STS, CAS cannot be run as a DDF bundle. CAS must be run through Apache Tomcat.
-
Follow the instructions on the CAS installation page to install and configure Tomcat/CAS. As with LDAP above, the keystore.jks file that is used must trust the hostnames used by the STS server, LDAP server, and the DDF user connecting to CAS.
-
Open the {TOMCAT}/webapps/cas/WEB-INF/cas.properties file and modify the cas.ldap.host, cas.ldap.port, cas.ldap.user.dn, and cas.ldap.password fields with your environment’s LDAP information.
STS
The Security Token Service (STS), unlike LDAP, cannot currently be installed on a kernel distribution of DDF. To run an STS-only DDF installation, uninstall the catalog components that are not being used; this will increase performance. A list of unneeded components can be found on the STS page.
-
In the unzipped DDF distribution folder, open /etc/org.ops4j.pax.web.cfg and find the following line:
org.ops4j.pax.web.ssl.keystore=etc/keystores/serverKeystore.jks
and change it to:
org.ops4j.pax.web.ssl.keystore=etc/keystores/stsKeystore.jks
-
Update the password fields to the ones your keystore uses.
-
Verify that the stsKeystore.jks file in /etc/keystores trusts the hostnames used in your environment (the hostnames of LDAP, CAS, and any DDF users that make use of this STS server).
-
Start the distribution.
-
Enter the following commands to install the features used by the STS server:
features:install security-sts-server
features:install security-cas-tokenvalidator
-
Open the DDF web console as an administrator. The default user is "admin" with a password of "admin" (no quotes).
-
Navigate to the Configuration tab.
-
Open the Security STS LDAP Login configuration.
-
Verify that the LDAP URL, LDAP Bind User DN, and LDAP Bind User Password fields match your LDAP server’s information. The default DDF LDAP username is "cn=user", and the default password is "secret" (no quotes). In a production environment, the username and password should be changed in the LDAP data file.
-
Select the Save button.
-
Open the Security STS LDAP and Roles Claims Handler configuration.
-
Populate the same URL, user, and password fields with your LDAP server information.
-
Select the Save button.
-
Open the Security STS CAS Token Validator configuration.
-
Under CAS Server URL, type the URL for your CAS server.
-
Select the Save button.
-
Open the Platform Global Configuration.
-
Change the protocol to https.
-
Populate the host/port information with the STS server’s host/port. For STS, the default port is 8993.
-
Update the Trust Store and Key Store location/password fields with your environment’s .jks files.
-
Select the Save button.
-
All of the authentication components should be running and configured at this point. The final step is to configure a DDF instance so that this authentication scheme is used.
Configuring DDF
Once everything is configured and running, hooking up an existing DDF instance to the authentication scheme is performed by setting a few configuration properties.
-
Verify that the
{DISTRIBUTION}/etc/keystoresfolder is updated with the correct keystores for your operating environment. -
Start the distribution.
-
Enter the following commands to install the CAS features:
features:install security-cas-cxfservletfilter
-
Open the Security CAS Client configuration.
-
Under Server Name and Proxy Callback URL, replace the hostname of 'server' with your server hostname.
-
Under CAS Server URL, enter the hostname for the CAS server.
-
Open the Security STS Client configuration. Verify that the host/port information in the STS Address field points to your STS server.
-
Open the Platform Global Configuration.
-
Change the protocol to https.
-
Populate the host/port information with the DDF instance’s host/port. For DDF, the default port is 8993.
-
Update the Trust Store and Key Store location/password fields with your environment’s .jks files.
-
Select the Save button.
DDF should now use the CAS/STS/LDAP servers to authenticate users when they attempt to log in.
Hardening
These instructions demonstrate how to harden a DDF system for a more secure installation.
|
The web administration console is not compatible with Internet Explorer 7. |
Disable the Web Console
To harden DDF for security purposes, disable the Web Console, so users do not have access. All configuration is performed using the DDF command line console and/or .cfg configuration files.
-
Open the Web Console: https://localhost:8993/system/console.
-
Enter the username (default is "admin") and the password (default is "admin").
-
Select the Features tab.
-
Select the Install/Uninstall button to uninstall the webconsole-base feature. Any subsequent attempt to access a page on the Web Console results in an HTTP 404 error.
-
To re-enable the Web Console, open the Karaf command line console and install the
ddf-webconsole feature.
features:install webconsole
Disable the Admin Console
Enable SSL for Clients
In order for outbound secure connections (HTTPS) to be made from components like Federated Sources and Resource Readers, configuration may need to be updated with keystores and security properties. These values are configured in the <DDF_INSTALL_DIR>/etc/system.properties file. The following values can be set:
| Property | Sample Value | Description |
|---|---|---|
| javax.net.ssl.trustStore | etc/keystores/serverTruststore.jks | The Java keystore that contains the trusted public certificates of the Certificate Authorities (CAs) used to validate outbound TLS/SSL connections (e.g., HTTPS). When an outbound secure connection is made, a handshake is done with the remote server, and the CA in the signing chain of the remote server’s certificate must be present in this trust store for the connection to succeed. |
| javax.net.ssl.trustStorePassword | changeit | The password for the truststore listed above. |
| javax.net.ssl.keyStore | etc/keystores/serverKeystore.jks | The keystore that contains the local server’s private key, used for signing and encryption. This must be set when establishing outgoing 2-way (mutual) SSL connections, where the local server must also present its certificate for the remote server to verify. |
| javax.net.ssl.keyStorePassword | changeit | The password for the keystore listed above. |
| javax.net.ssl.keyStoreType | jks | The type of keystore. |
| https.cipherSuites | TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA | The cipher suites that are supported when making outbound HTTPS connections. |
| https.protocols | TLSv1.1,TLSv1.2 | The protocols that are supported when making outbound HTTPS connections. |
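These entries act as JVM-wide defaults for outbound TLS. As a hedged sketch, setting the same keys programmatically shows their effect; the keystore path below is a placeholder, not a file shipped with DDF:

```java
// Sketch: the javax.net.ssl.* and https.* keys from the table above are
// standard JVM system properties; any outbound HTTPS connection made later
// (e.g. via HttpsURLConnection) reads them when building its SSL context.
public class SslDefaults {
    public static void main(String[] args) {
        // Equivalent to the system.properties entries in the table above;
        // the truststore path here is a placeholder for your environment's file.
        System.setProperty("javax.net.ssl.trustStore", "etc/keystores/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        System.setProperty("javax.net.ssl.keyStoreType", "jks");
        System.setProperty("https.protocols", "TLSv1.1,TLSv1.2");

        System.out.println(System.getProperty("https.protocols"));
    }
}
```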
Configure DDF without the Web Administration Console
|
Depending on the environment, it may be easier for integrators and administrators to configure DDF using the Web Console prior to disabling it and switching to SSL. The Web Console can be re-enabled for additional configuration changes. |
In an environment hardened for security purposes, access to the DDF Web Console is denied. It is necessary to configure DDF (e.g., providers, Schematron rulesets, etc.) using .cfg configuration files or the Karaf command line console. The OSGi container detects the addition, updating, or deletion of .cfg files in the etc/ddf directory.
The following sections describe how to configure each DDF item using both of these mechanisms. A template file is provided for each configurable DDF item so that it can be copied/renamed then modified with the appropriate settings.
|
If the Web Console is enabled again at a later time, all of the configuration done via .cfg files and/or the Karaf command line console is loaded and displayed. However, note that the name of the .cfg file is not used in the admin console. Rather, OSGi assigns a universally unique identifier (UUID) when the DDF item is created and displays this UUID in the console (e.g., OpenSearchSource.112f298e-26a5-4094-befc-79728f216b9b). |
Templates included with DDF:
| DDF Service | Template File Name | Factory PID | Configurable Properties (from DDF User’s Guide) |
|---|---|---|---|
DDF Catalog Framework |
ddf.catalog.impl.service.CatalogFrameworkImpl.cfg |
ddf.catalog.CatalogFrameworkImpl |
Standard Catalog Framework |
Configure Using a .cfg File Template
The following steps define the procedure for configuring a new source or feature using a config file.
-
Copy/rename the provided template file in the etc/templates directory to the etc directory. (Refer to the table above to determine correct template.)
-
Mandatory: The dash between the PID (e.g., OpenSearchSource in OpenSearchSource-site.cfg) and the instance name (e.g., site in OpenSearchSource-site.cfg) is required. The dash is a reserved character used by OSGi that identifies instances of a managed service factory that should be created. -
Not required, but a good practice is to change the instance name (e.g.,
federated_source) of the file to something identifiable (e.g., source1-ddf).
-
-
Edit the file copied into etc with the settings for the configuration. (Refer to the table above to determine the configurable properties.)
-
This file is a Java properties file, hence the syntax is
<key>=<value>. -
Consult the inline comments in the file for guidance on what to modify.
-
The Configurable Properties tables in the Integrator’s Guide for the Included Catalog Components also describe each field and its value.
-
The new service can now be used as if it was created using the Web Console.
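As a sketch of what such a file might contain, a hypothetical etc/OpenSearchSource-source1.cfg could hold Java-properties settings like the following. The endpointUrl key is taken from the command line example in the next section; any other keys would come from the template's inline comments:

```properties
# Hypothetical etc/OpenSearchSource-source1.cfg
# Java properties syntax: <key>=<value>
# Consult the template's inline comments for the full set of keys.
endpointUrl=https://ddf:8993/services/query?q={searchTerms}
```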
Configure Using the Command Line Console
Configuring a new source, provider, or feature using the command console follows a standard procedure. The properties and their values will change based on type of service being created, but the actual commands entered at the command line do not change. To help illustrate the commands, an example of creating a new OpenSearch federated source is shown after each step.
-
Create the federated source using the config:edit command.
-
Mandatory: The dash between the PID (e.g., OpenSearchSource in OpenSearchSource-my_federated_source) and the instance name (e.g., my_federated_source) is required. The dash is a reserved character used by OSGi that identifies instances of a managed service factory that should be created.
config:edit OpenSearchSource-my_federated_source
-
-
Enter the settings for the federated source’s properties. These properties can be found in the template file for the specified service.
config:propset endpointUrl https://ddf:8993/services/query?q={searchTerms}&src={fs:routeTo?}&mr={fs:maxResults?}&count={count?}&mt={fs:maxTimeout?}&dn={idn:userDN?}&lat={geo:lat?}&lon={geo:lon?}&radius={geo:radius?}&bbox={geo:box?}&polygon={geo:polygon?}&dtstart={time:start?}&dtend={time:end?}&dateName={cat:dateName?}&filter={fsa:filter?}&sort={fsa:sort?} -
Save the configuration updates.
config:update
-
The new service can now be used as if it was created using the Web Console.
Directory Permissions
|
DDF_HOME
DDF is installed in the DDF_HOME directory. |
Directory Permissions on Windows
Restrict access to sensitive files by ensuring that the only users with access privileges are administrators.
-
Right-click on the file or directory noted below then select Full Control → Administrators → System.
-
Click Properties → Security → Advanced and select Creator Owner for
DDF_HOME(e.g.,C:\ddf). -
Restrict access to sensitive files by ensuring that only System and Administrators have Full Control to the below files by right-clicking on the file or directory below then selecting Properties → Security → Advanced.
-
Delete any other groups or users listed with access to
DDF_HOME/etcandDDF_HOME/deploy.
Directory Permissions on *NIX
Protect the DDF from unauthorized access.
-
As root, change the owner and group of critical DDF directories to the NON_ROOT_USER.
A NON_ROOT_USER (e.g., ddf) is recommended for installation.
chown -R NON_ROOT_USER $DDF_HOME $DDF_HOME/etc $DDF_HOME/data
chgrp -R NON_ROOT_USER $DDF_HOME/etc $DDF_HOME/data
chmod -R og-w $DDF_HOME/etc $DDF_HOME/data
-
Restrict access to sensitive files by ensuring that only users in the application group (e.g., ddf-group) have access.
-
Execute the following command on the above files (examples assume DDF_HOME is /opt/ddf):
chmod -R o-rwx /opt/ddf
-
As the application owner (e.g., the ddf user), restrict access to sensitive files.
chmod 640 /opt/ddf/etc
chmod 640 /opt/ddf/deploy
|
The system administrator must restrict certain directories to ensure that the application (user) cannot access restricted directories on the system. For example, the NON_ROOT_USER should only have read access to |
Deployment Guidelines
DDF relies on the Directory Permissions of the host platform to protect the integrity of the DDF during operation. System administrators should perform the following steps when deploying bundles added to the DDF.
-
Prior to allowing a hot deployment, check the available storage space on the system to ensure the deployment will not exceed the available space.
-
Set maximum storage space on the
DDF_HOME/deployandDDF_HOME/systemdirectories to restrict the amount of space used by deployments. -
Do not assume the deployment is from a trusted source; verify its origination.
-
Use the source code to verify a deployment is required for DDF to prevent unnecessary/vulnerable deployments.
Endpoint Schema Validation
SOAP Web Services may have WSDL validation enabled. Ensure that the bundle has WSDL schema validation enabled. These instructions assume the implementation made use of the Spring beans model. All DDF endpoint bundles follow this model.
-
Prior to deploying a bundle/feature, verify the schema validation if it is a DDF endpoint.
-
Modify the
beans.xml file
-
In a terminal window, change directory to the feature directory under the DDF installation directory.
cd DDF_HOME/system/com/lmco/ddf
Unzip the endpoint-bundle.jar.
unzip endpoint-bundle.jar -
Change directory to the
directory containing the beans.xml file.
cd META-INF/spring
-
Open the
beans.xml file in an editor (e.g., vi).
Search for
schema-validation-enabled and change its value to true.
<entry key="schema-validation-enabled" value="true"/>
-
Save and close the file.
-
Change directory to the feature directory,
cd ../..
-
-
Recreate the jar file (use zip or another archive tool).
zip endpoint-bundle.jar * -
Re-install the feature.
-
Open the Web browser and navigate to the DDF Web Console.
-
Select the Install button. Schema validation is now enabled for the endpoint.
Assuring Authenticity
DDF Artifacts in the JAR file format (such as bundles or DDF applications packaged as KAR files) can be signed and verified using the tools included as part of the Java Runtime Environment.
Prerequisites
To work with Java signatures, a keystore/truststore is required. For the purposes of this example, we’ll sign and validate using a self-signed certificate, which can be generated with the keytool utility. In production, a certificate issued by a trusted Certificate Authority should be used.
Additional documentation on keytool can be found at http://docs.oracle.com/javase/6/docs/technotes/tools/windows/keytool.html
~ $ keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048
What is your first and last name?
[Unknown]: Nick Fury
What is the name of your organizational unit?
[Unknown]: Marvel
What is the name of your organization?
[Unknown]: SHIELD
What is the name of your City or Locality?
[Unknown]: New York
What is the name of your State or Province?
[Unknown]: NY
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=Nick Fury, OU=Marvel, O=SHIELD, L="New York", ST=NY, C=US correct?
[no]: yes
Enter key password for <selfsigned>
(RETURN if same as keystore password):
Re-enter new password:
Signing a JAR/KAR
Once a keystore is available, the jarsigner utility is used to sign a JAR or KAR file.
Additional documentation on jarsigner can be found at http://docs.oracle.com/javase/6/docs/technotes/tools/windows/jarsigner.html
~ $ jarsigner -keystore keystore.jks -keypass shield -storepass password catalog-app-2.5.1.kar selfsigned
Verifying a JAR/KAR
The jarsigner utility is also used to verify a signature in a JAR-formatted file.
~ $ jarsigner -verify -verbose -keystore keystore.jks catalog-app-2.5.1.kar
9447 Mon Oct 06 17:05:46 MST 2014 META-INF/MANIFEST.MF
9503 Mon Oct 06 17:05:46 MST 2014 META-INF/SELFSIGN.SF
1303 Mon Oct 06 17:05:46 MST 2014 META-INF/SELFSIGN.RSA
0 Wed Sep 17 17:14:06 MST 2014 META-INF/
0 Wed Sep 17 17:14:10 MST 2014 META-INF/maven/
0 Wed Sep 17 17:14:10 MST 2014 META-INF/maven/ddf.catalog/
0 Wed Sep 17 17:14:10 MST 2014 META-INF/maven/ddf.catalog/catalog-app/
smk 4080 Wed Sep 17 16:54:18 MST 2014 META-INF/maven/ddf.catalog/catalog-app/pom.xml
smk 107 Wed Sep 17 17:14:06 MST 2014 META-INF/maven/ddf.catalog/catalog-app/pom.properties
0 Wed Sep 17 17:14:06 MST 2014 repository/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/catalog/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/catalog/catalog-app/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/catalog/catalog-app/2.5.1/
smk 12543 Wed Sep 17 17:14:06 MST 2014 repository/ddf/catalog/catalog-app/2.5.1/catalog-app-2.5.1-features.xml
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/catalog/core/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/catalog/core/catalog-core-api/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/catalog/core/catalog-core-api/2.5.1/
smk 188995 Wed Sep 17 16:55:28 MST 2014 repository/ddf/catalog/core/catalog-core-api/2.5.1/catalog-core-api-2.5.1.jar
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/mime/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/mime/core/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/mime/core/mime-core-api/
0 Wed Sep 17 17:14:06 MST 2014 repository/ddf/mime/core/mime-core-api/2.5.0/
smk 4396 Wed Sep 10 12:38:24 MST 2014 repository/ddf/mime/core/mime-core-api/2.5.0/mime-core-api-2.5.0.jar
0 Wed Sep 17 17:14:06 MST 2014 repository/org/
0 Wed Sep 17 17:14:06 MST 2014 repository/org/apache/
0 Wed Sep 17 17:14:06 MST 2014 repository/org/apache/tika/
0 Wed Sep 17 17:14:06 MST 2014 repository/org/apache/tika/tika-core/
0 Wed Sep 17 17:14:06 MST 2014 repository/org/apache/tika/tika-core/1.2/
smk 463945 Thu Feb 13 09:26:04 MST 2014 repository/org/apache/tika/tika-core/1.2/tika-core-1.2.jar
0 Wed Sep 17 17:14:06 MST 2014 repository/org/apache/tika/tika-bundle/
0 Wed Sep 17 17:14:06 MST 2014 repository/org/apache/tika/tika-bundle/1.2/
smk 22360866 Thu Feb 13 09:26:54 MST 2014 repository/org/apache/tika/tika-bundle/1.2/tika-bundle-1.2.jar
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/gt-opengis/
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/gt-opengis/8.4_1/
smk 2335529 Thu Feb 13 09:32:42 MST 2014 repository/org/codice/thirdparty/gt-opengis/8.4_1/gt-opengis-8.4_1.jar
0 Wed Sep 17 17:14:08 MST 2014 repository/ddf/catalog/core/catalog-core-commons/
0 Wed Sep 17 17:14:08 MST 2014 repository/ddf/catalog/core/catalog-core-commons/2.5.1/
smk 38441 Wed Sep 17 16:56:10 MST 2014 repository/ddf/catalog/core/catalog-core-commons/2.5.1/catalog-core-commons-2.5.1.jar
0 Wed Sep 17 17:14:08 MST 2014 repository/ddf/catalog/core/catalog-core-camelcomponent/
0 Wed Sep 17 17:14:08 MST 2014 repository/ddf/catalog/core/catalog-core-camelcomponent/2.5.1/
smk 103672 Wed Sep 17 16:57:30 MST 2014 repository/ddf/catalog/core/catalog-core-camelcomponent/2.5.1/catalog-core-camelcomponent-2.5.1.jar
0 Wed Sep 17 17:14:08 MST 2014 repository/ddf/measure/
0 Wed Sep 17 17:14:08 MST 2014 repository/ddf/measure/measure-api/
0 Wed Sep 17 17:14:08 MST 2014 repository/ddf/measure/measure-api/2.5.1/
smk 609307 Wed Sep 17 16:54:52 MST 2014 repository/ddf/measure/measure-api/2.5.1/measure-api-2.5.1.jar
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/picocontainer/
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/picocontainer/1.2_1/
smk 10819 Thu Feb 13 09:32:42 MST 2014 repository/org/codice/thirdparty/picocontainer/1.2_1/picocontainer-1.2_1.jar
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/vecmath/
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/vecmath/1.3.2_1/
smk 90446 Thu Feb 13 09:32:42 MST 2014 repository/org/codice/thirdparty/vecmath/1.3.2_1/vecmath-1.3.2_1.jar
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/geotools-suite/
0 Wed Sep 17 17:14:08 MST 2014 repository/org/codice/thirdparty/geotools-suite/8.4_1/
smk 25175516 Thu Feb 13 09:33:40 MST 2014 repository/org/codice/thirdparty/geotools-suite/8.4_1/geotools-suite-8.4_1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/codice/thirdparty/jts/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/codice/thirdparty/jts/1.12_1/
smk 663441 Thu Feb 13 09:33:44 MST 2014 repository/org/codice/thirdparty/jts/1.12_1/jts-1.12_1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-federationstrategy/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-federationstrategy/2.5.1/
smk 155049 Wed Sep 17 17:01:02 MST 2014 repository/ddf/catalog/core/catalog-core-federationstrategy/2.5.1/catalog-core-federationstrategy-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/codice/thirdparty/lucene-core/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/codice/thirdparty/lucene-core/3.0.2_1/
smk 1041824 Thu Feb 13 09:33:48 MST 2014 repository/org/codice/thirdparty/lucene-core/3.0.2_1/lucene-core-3.0.2_1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/ddf-pubsub/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/ddf-pubsub/2.5.1/
smk 152993 Wed Sep 17 16:58:18 MST 2014 repository/ddf/catalog/core/ddf-pubsub/2.5.1/ddf-pubsub-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-eventcommands/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-eventcommands/2.5.1/
smk 11132 Wed Sep 17 17:01:10 MST 2014 repository/ddf/catalog/core/catalog-core-eventcommands/2.5.1/catalog-core-eventcommands-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/ddf-pubsub-tracker/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/ddf-pubsub-tracker/2.5.1/
smk 6130 Wed Sep 17 17:05:52 MST 2014 repository/ddf/catalog/core/ddf-pubsub-tracker/2.5.1/ddf-pubsub-tracker-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-urlresourcereader/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-urlresourcereader/2.5.1/
smk 84648 Wed Sep 17 16:57:00 MST 2014 repository/ddf/catalog/core/catalog-core-urlresourcereader/2.5.1/catalog-core-urlresourcereader-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/filter-proxy/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/filter-proxy/2.5.1/
smk 33497 Wed Sep 17 16:56:24 MST 2014 repository/ddf/catalog/core/filter-proxy/2.5.1/filter-proxy-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-commands/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-commands/2.5.1/
smk 664977 Wed Sep 17 16:56:34 MST 2014 repository/ddf/catalog/core/catalog-core-commands/2.5.1/catalog-core-commands-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-metacardgroomerplugin/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-metacardgroomerplugin/2.5.1/
smk 31421 Wed Sep 17 17:06:04 MST 2014 repository/ddf/catalog/core/catalog-core-metacardgroomerplugin/2.5.1/catalog-core-metacardgroomerplugin-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/metacard-type-registry/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/metacard-type-registry/2.5.1/
smk 6349 Wed Sep 17 17:05:58 MST 2014 repository/ddf/catalog/core/metacard-type-registry/2.5.1/metacard-type-registry-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-standardframework/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-standardframework/2.5.1/
smk 4930895 Wed Sep 17 16:58:40 MST 2014 repository/ddf/catalog/core/catalog-core-standardframework/2.5.1/catalog-core-standardframework-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-resourcesizeplugin/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-resourcesizeplugin/2.5.1/
smk 4889822 Wed Sep 17 17:06:42 MST 2014 repository/ddf/catalog/core/catalog-core-resourcesizeplugin/2.5.1/catalog-core-resourcesizeplugin-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/fanout-catalogframework/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/fanout-catalogframework/2.5.1/
smk 9707692 Wed Sep 17 17:01:20 MST 2014 repository/ddf/catalog/core/fanout-catalogframework/2.5.1/fanout-catalogframework-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-metricsplugin/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-metricsplugin/2.5.1/
smk 708240 Wed Sep 17 16:57:38 MST 2014 repository/ddf/catalog/core/catalog-core-metricsplugin/2.5.1/catalog-core-metricsplugin-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-sourcemetricsplugin/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/core/catalog-core-sourcemetricsplugin/2.5.1/
smk 709297 Wed Sep 17 17:06:14 MST 2014 repository/ddf/catalog/core/catalog-core-sourcemetricsplugin/2.5.1/catalog-core-sourcemetricsplugin-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/schematron/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/schematron/catalog-schematron-plugin/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/schematron/catalog-schematron-plugin/2.5.1/
smk 19034 Wed Sep 17 17:09:08 MST 2014 repository/ddf/catalog/schematron/catalog-schematron-plugin/2.5.1/catalog-schematron-plugin-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/rest/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/rest/catalog-rest-endpoint/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/rest/catalog-rest-endpoint/2.5.1/
smk 151862 Wed Sep 17 17:12:54 MST 2014 repository/ddf/catalog/rest/catalog-rest-endpoint/2.5.1/catalog-rest-endpoint-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/opensearch/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/opensearch/catalog-opensearch-endpoint/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/opensearch/catalog-opensearch-endpoint/2.5.1/
smk 465789 Wed Sep 17 17:12:26 MST 2014 repository/ddf/catalog/opensearch/catalog-opensearch-endpoint/2.5.1/catalog-opensearch-endpoint-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-extensions-opensearch/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-extensions-opensearch/1.1.3/
smk 33785 Thu Feb 13 09:31:18 MST 2014 repository/org/apache/abdera/abdera-extensions-opensearch/1.1.3/abdera-extensions-opensearch-1.1.3.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-server/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-server/1.1.3/
smk 162766 Thu Feb 13 09:31:18 MST 2014 repository/org/apache/abdera/abdera-server/1.1.3/abdera-server-1.1.3.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/opensearch/catalog-opensearch-source/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/opensearch/catalog-opensearch-source/2.5.1/
smk 136957 Wed Sep 17 17:13:04 MST 2014 repository/ddf/catalog/opensearch/catalog-opensearch-source/2.5.1/catalog-opensearch-source-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/commons-codec/
0 Wed Sep 17 17:14:10 MST 2014 repository/commons-codec/commons-codec/
0 Wed Sep 17 17:14:10 MST 2014 repository/commons-codec/commons-codec/1.4/
smk 58160 Thu Feb 13 09:33:48 MST 2014 repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.axiom-impl/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.axiom-impl/1.2.12-2/
smk 121899 Thu Feb 13 09:33:48 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.axiom-impl/1.2.12-2/org.apache.servicemix.bundles.axiom-impl-1.2.12-2.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/ws/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/ws/commons/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/ws/commons/axiom/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/ws/commons/axiom/axiom-api/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/ws/commons/axiom/axiom-api/1.2.10/
smk 417361 Thu Feb 13 09:33:50 MST 2014 repository/org/apache/ws/commons/axiom/axiom-api/1.2.10/axiom-api-1.2.10.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-core/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-core/1.1.3/
smk 160895 Thu Feb 13 09:24:52 MST 2014 repository/org/apache/abdera/abdera-core/1.1.3/abdera-core-1.1.3.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-client/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-client/1.1.3/
smk 62059 Thu Feb 13 09:24:52 MST 2014 repository/org/apache/abdera/abdera-client/1.1.3/abdera-client-1.1.3.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-i18n/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-i18n/1.1.3/
smk 622568 Thu Feb 13 09:24:54 MST 2014 repository/org/apache/abdera/abdera-i18n/1.1.3/abdera-i18n-1.1.3.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.abdera-parser/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.abdera-parser/1.1.3_1/
smk 1379508 Thu Feb 13 09:33:54 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.abdera-parser/1.1.3_1/org.apache.servicemix.bundles.abdera-parser-1.1.3_1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.dom4j/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.dom4j/1.6.1_5/
smk 325676 Thu Feb 13 09:33:56 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.dom4j/1.6.1_5/org.apache.servicemix.bundles.dom4j-1.6.1_5.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.jdom/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.jdom/1.1.2_1/
smk 160101 Thu Feb 13 09:33:56 MST 2014 repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.jdom/1.1.2_1/org.apache.servicemix.bundles.jdom-1.1.2_1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/codice/thirdparty/commons-httpclient/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/codice/thirdparty/commons-httpclient/3.1.0_1/
smk 306098 Thu Feb 13 09:33:56 MST 2014 repository/org/codice/thirdparty/commons-httpclient/3.1.0_1/commons-httpclient-3.1.0_1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/plugin/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/plugin/plugin-federation-replication/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/plugin/plugin-federation-replication/2.5.1/
smk 8986 Wed Sep 17 17:12:02 MST 2014 repository/ddf/catalog/plugin/plugin-federation-replication/2.5.1/plugin-federation-replication-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-metadata/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-metadata/2.5.1/
smk 32559 Wed Sep 17 17:09:44 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-metadata/2.5.1/catalog-transformer-metadata-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-thumbnail/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-thumbnail/2.5.1/
smk 32578 Wed Sep 17 17:09:52 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-thumbnail/2.5.1/catalog-transformer-thumbnail-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/service-xslt-transformer/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/service-xslt-transformer/2.5.1/
smk 47227 Wed Sep 17 17:09:28 MST 2014 repository/ddf/catalog/transformer/service-xslt-transformer/2.5.1/service-xslt-transformer-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-resource/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-resource/2.5.1/
smk 83019 Wed Sep 17 17:09:34 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-resource/2.5.1/catalog-transformer-resource-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/tika-input-transformer/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/tika-input-transformer/2.5.1/
smk 32522 Wed Sep 17 17:10:06 MST 2014 repository/ddf/catalog/transformer/tika-input-transformer/2.5.1/tika-input-transformer-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/geojson-metacard-transformer/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/geojson-metacard-transformer/2.5.1/
smk 9004 Wed Sep 17 17:10:22 MST 2014 repository/ddf/catalog/transformer/geojson-metacard-transformer/2.5.1/geojson-metacard-transformer-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/geojson-queryresponse-transformer/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/geojson-queryresponse-transformer/2.5.1/
smk 53446 Wed Sep 17 17:10:28 MST 2014 repository/ddf/catalog/transformer/geojson-queryresponse-transformer/2.5.1/geojson-queryresponse-transformer-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/geojson-input-transformer/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/geojson-input-transformer/2.5.1/
smk 35487 Wed Sep 17 17:10:16 MST 2014 repository/ddf/catalog/transformer/geojson-input-transformer/2.5.1/geojson-input-transformer-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/service-atom-transformer/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/service-atom-transformer/2.5.1/
smk 38484 Wed Sep 17 17:10:40 MST 2014 repository/ddf/catalog/transformer/service-atom-transformer/2.5.1/service-atom-transformer-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-extensions-geo/
0 Wed Sep 17 17:14:10 MST 2014 repository/org/apache/abdera/abdera-extensions-geo/1.1.3/
smk 28410 Thu Feb 13 09:24:52 MST 2014 repository/org/apache/abdera/abdera-extensions-geo/1.1.3/abdera-extensions-geo-1.1.3.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/common/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/common/geo-formatter/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/common/geo-formatter/2.5.1/
smk 15970 Wed Sep 17 16:55:18 MST 2014 repository/ddf/catalog/common/geo-formatter/2.5.1/geo-formatter-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/com/
0 Wed Sep 17 17:14:10 MST 2014 repository/com/googlecode/
0 Wed Sep 17 17:14:10 MST 2014 repository/com/googlecode/json-simple/
0 Wed Sep 17 17:14:10 MST 2014 repository/com/googlecode/json-simple/json-simple/
0 Wed Sep 17 17:14:10 MST 2014 repository/com/googlecode/json-simple/json-simple/1.1.1/
smk 23931 Thu Feb 13 09:24:52 MST 2014 repository/com/googlecode/json-simple/json-simple/1.1.1/json-simple-1.1.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-xml/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-xml/2.5.1/
smk 1954994 Wed Sep 17 17:11:02 MST 2014 repository/ddf/catalog/transformer/catalog-transformer-xml/2.5.1/catalog-transformer-xml-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/commons-collections/
0 Wed Sep 17 17:14:10 MST 2014 repository/commons-collections/commons-collections/
0 Wed Sep 17 17:14:10 MST 2014 repository/commons-collections/commons-collections/3.2.1/
smk 575389 Thu Feb 13 09:24:34 MST 2014 repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/security/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/security/catalog-security-filter/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/security/catalog-security-filter/2.5.1/
smk 6492 Wed Sep 17 17:13:40 MST 2014 repository/ddf/catalog/security/catalog-security-filter/2.5.1/catalog-security-filter-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/security/catalog-security-plugin/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/security/catalog-security-plugin/2.5.1/
smk 5463 Wed Sep 17 17:13:50 MST 2014 repository/ddf/catalog/security/catalog-security-plugin/2.5.1/catalog-security-plugin-2.5.1.jar
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/security/catalog-security-logging/
0 Wed Sep 17 17:14:10 MST 2014 repository/ddf/catalog/security/catalog-security-logging/2.5.1/
smk 6768 Wed Sep 17 17:13:58 MST 2014 repository/ddf/catalog/security/catalog-security-logging/2.5.1/catalog-security-logging-2.5.1.jar
s = signature was verified
m = entry is listed in manifest
k = at least one certificate was found in keystore
i = at least one certificate was found in identity scope
jar verified.
Note the last line: jar verified. This indicates that the signatures used to sign the JAR (or in this case, KAR) were valid according to the trust relationships specified by the keystore.
Security
Configure STS Subject DN constraints
The configuration for the STS Subject DN constraints is located in the ws-security.subject.cert.constraints property in the <DDF_INSTALL_DIR>/etc/system.properties file.
The default constraints are ".*", which allows any connection with a trusted certificate the ability to access the STS. This opens up the vulnerability for a client to "pretend" to be another client. To prevent this, modify this property to be a list of regular expressions that map to the fully qualified hostnames of systems allowed to contact the STS.
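As a sketch of how such a constraint behaves, the following shell snippet tests certificate subject DNs against a candidate regular expression. The constraint, hostnames, and DNs below are hypothetical examples, not the shipped default, and this illustrates only the regex matching, not the actual STS code path:

```shell
# Hypothetical constraint restricting access to hosts under example.com.
constraint='.*CN=.*\.example\.com.*'

# A DN from an allowed host matches the constraint...
echo "CN=server1.example.com, OU=Dev, O=Codice" | grep -Eq "$constraint" && echo allowed || echo denied

# ...while a DN from any other host does not.
echo "CN=rogue.other.org, OU=Dev, O=Codice" | grep -Eq "$constraint" && echo allowed || echo denied
```

In system.properties, the equivalent setting would be a comma-separated list of such expressions assigned to ws-security.subject.cert.constraints.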
Manage Users and Passwords
The default security configuration uses a property file located in DDF_HOME/etc/users.properties to store users and passwords.
The default Web Console user is "admin" (no quotes) with a password of "admin" (no quotes). Change this password to a more secure password by editing this file.
Format: USER=PASSWORD,ROLE1,ROLE2,.... Current default: admin=admin,admin
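For example, a users.properties file that replaces the default admin password and adds a second account might look like the following (the passwords and the analyst user here are illustrative, not defaults):

```properties
# <username>=<password>,<role1>,<role2>,...
admin=Str0ngerPassw0rd,admin
analyst=an0therPassw0rd,user
```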
Enable Password Encryption
In the DDF Text Console, enter the following commands:
ddf@local> config:edit --force org.apache.karaf.jaas
ddf@local> config:propset encryption.enabled true
ddf@local> config:update
ddf@local> dev:restart
The passwords will then be encrypted in the users.properties file once DDF restarts.
Known Issues
A system administrator must block visual access to the screen when administering passwords for particular components, such as the OpenSearch source. This is a known issue and will be addressed in a future version of DDF.
Starting DDF
Follow the steps below to start and stop DDF.
Start DDF
*NIX
Run the following script from a command shell to start the distribution and open a local console:
DDF_INSTALL/bin/ddf
Windows
Run the following script from a console window to start the distribution and open a local console:
DDF_INSTALL/bin/ddf.bat
Stop DDF
There are two options:
-
Call shutdown from the console:
ddf@local>shutdown
Or force shutdown without a confirmation prompt:
ddf@local>shutdown -f
-
Or run the stop script:
DDF_INSTALL/bin/stop
DDF_INSTALL/bin/stop.bat
|
Shut Down
Do not shut down DDF by closing the console window (Windows, *NIX) or by killing the process. Use the shutdown command or the stop script instead. |
Automatic Start on System Boot
Because DDF is built on top of Apache Karaf, DDF can use the Karaf Wrapper to enable automatic startup and shutdown.
-
Create the Karaf wrapper.
Within the DDF console:
ddf@local> features:install wrapper
ddf@local> wrapper:install -s AUTO_START -n ddf -d ddf -D "DDF Service"
-
(All *NIX; Windows users skip to the next step) If DDF was installed to run as a non-root user (recommended), edit DDF_INSTALL/bin/ddf-service. Change:
#RUN_AS_USER=
to:
RUN_AS_USER=<ddf-user>
-
Set the memory in the wrapper config to match the DDF default memory setting.
-
Add the setting for PermGen space under the JVM Parameters section.
-
Update the heap space to 2048 MB.
DDF_INSTALL/etc/ddf-wrapper.conf
# Add the following:
wrapper.java.additional.11=-Dderby.system.home="..\data\derby"
wrapper.java.additional.12=-Dderby.storage.fileSyncTransactionLog=true
wrapper.java.additional.13=-Dcom.sun.management.jmxremote
wrapper.java.additional.14=-Dfile.encoding=UTF8
wrapper.java.additional.15=-Dddf.home=%DDF_HOME%
# Update the following:
wrapper.java.maxmemory=2048
-
Set the DDF_HOME property.
DDF_INSTALL/etc/ddf-wrapper.conf
set.default.DDF_HOME="%KARAF_HOME%"
-
Install the wrapper startup/shutdown scripts.
Windows
Run the following command in a console window. The command must be run with elevated permissions.
DDF_INSTALL/bin/ddf-service.bat install
Startup and shutdown settings can then be managed through the Services MMC snap-in (Start → Control Panel → Administrative Tools → Services).
Redhat
root@localhost# ln -s DDF_INSTALL/bin/ddf-service /etc/init.d/
root@localhost# chkconfig ddf-service --add
root@localhost# chkconfig ddf-service on
Ubuntu
root@localhost# ln -s DDF_INSTALL/bin/ddf-service /etc/init.d/
root@localhost# update-rc.d -f ddf-service defaults
Solaris
root@localhost# ln -s DDF_INSTALL/bin/ddf-service /etc/init.d/
root@localhost# ln -s /etc/init.d/ddf-service /etc/rc0.d/K20ddf-service
root@localhost# ln -s /etc/init.d/ddf-service /etc/rc1.d/K20ddf-service
root@localhost# ln -s /etc/init.d/ddf-service /etc/rc2.d/K20ddf-service
root@localhost# ln -s /etc/init.d/ddf-service /etc/rc3.d/S20ddf-service
While it is not a necessary step, information on how to convert the System V init scripts to the Solaris Service Management Facility can be found at http://www.oracle.com/technetwork/articles/servers-storage-admin/scripts-to-smf-1641705.html.
Solaris-Specific Modification
Due to a slight difference between the Linux and Solaris implementations of the ps command, the ddf-service script needs to be modified.
-
Locate the following line in DDF_INSTALL/bin/ddf-service
DDF_INSTALL/bin/ddf-service
pidtest=`$PSEXE -p $pid -o command | grep $WRAPPER_CMD | tail -1`
-
Change the word command to comm.
DDF_INSTALL/bin/ddf-service
pidtest=`$PSEXE -p $pid -o comm | grep $WRAPPER_CMD | tail -1`
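The difference can be checked directly on any POSIX-style system: with ps -o, comm is the standardized format keyword and prints just the command name, whereas command (the full command line) is not accepted by the Solaris ps, which is why the script must be edited:

```shell
# "comm" is the POSIX-specified ps format keyword; it prints only the
# executable name of the current shell process ($$).
ps -p $$ -o comm=
```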
Karaf Documentation
Because DDF is built on Apache Karaf, more information on operating DDF can be found in the Karaf documentation at http://karaf.apache.org/index/documentation.html.
Console Commands
Once the distribution has started, users will have access to a powerful command line console. This text console can be used to manage services, install new features and applications, and manage the state of the system.
Access the System Console
The Command Line Shell Console is the console that is available to the user when the distribution is started manually. It may also be accessed from the Web Console through the Gogo tab or by using the bin/client.bat or bin/client.sh scripts. For more information on how to use the client scripts or how to remote into the shell console, see Using Remote Instances.
Example Commands
View Bundle Status
Call osgi:list on the console to view the status of the bundles loaded in the distribution.
View Installed Features
Execute features:list to view the features installed in the distribution.
|
The majority of functionality and information available on the Web Console is also available on the Command Line Shell Console. |
Catalog Commands
| Title | Namespace | Description |
|---|---|---|
| DDF :: Catalog :: Core :: Commands | catalog | The Catalog Shell Commands are meant to be used with any CatalogProvider implementation. They provide generally useful queries and functions against the Catalog API that can be used for debugging, printing, or scripting. |
|
Most commands can bypass the Catalog framework and interact directly with the Catalog provider when the --provider option is given, if the command supports it. When the --provider option is used, no pre/post plugins are executed and no message validation is performed. |
Commands
catalog:describe catalog:dump catalog:envlist catalog:ingest catalog:inspect catalog:latest catalog:migrate catalog:range catalog:remove catalog:removeall catalog:replicate catalog:search catalog:spatial
Command Descriptions
| Command | Description |
|---|---|
| describe | Provides a basic description of the Catalog implementation. |
| dump | Exports metacards from the local Catalog without removing them. See below for date filtering options. |
| envlist | Provides a list of environment variables. |
| ingest | Ingests data files into the Catalog. |
| inspect | Provides the various fields of a metacard for inspection. |
| latest | Retrieves the latest records from the Catalog based on the Metacard.MODIFIED date. |
| migrate | Allows two CatalogProviders to be configured and migrates the data from the primary to the secondary. |
| range | Searches by the given range arguments (exclusive). |
| remove | Deletes a record from the local Catalog. |
| removeall | Attempts to delete all records from the local Catalog. |
| replicate | Replicates data from a federated source into the local Catalog. |
| search | Searches records in the local Catalog. |
| spatial | Performs a spatial search of the local Catalog. |
Available System Console Commands
To get a list of commands, type in the namespace of the desired extension then press the Tab key.
For example, type catalog, then press Tab.
System Console Command Help
For details on any command, type help then the command. For example, help search (see results of this command in the example below).
ddf@local>help search
DESCRIPTION
catalog:search
Searches records in the catalog provider.
SYNTAX
catalog:search [options] SEARCH_PHRASE [NUMBER_OF_ITEMS]
ARGUMENTS
SEARCH_PHRASE
Phrase to query the catalog provider.
NUMBER_OF_ITEMS
Maximum number of records to display.
(defaults to -1)
OPTIONS
--help
Display this help message
--case-sensitive, -c
Makes the search case sensitive
-p, --provider
Interacts with the provider directly instead of the framework.
The help command provides a description of the given command, along with its syntax, the arguments it accepts, and the available options.
catalog:dump Options
The catalog:dump command was extended in DDF version 2.5.0 to provide selective export of metacards based on date ranges.
The --created-after and --created-before options allow filtering on the date and time that the metacard was created, while --modified-after and --modified-before options allow filtering on the date and time that the metacard was last modified (which is the created date if no other modifications were made). These date ranges are exclusive (i.e., if the date and time match exactly, the metacard will not be included). The date filtering options (--created-after, --created-before, --modified-after, and --modified-before) can be used in any combination, with the export result including only metacards that match all of the provided conditions.
If no date filtering options are provided, created and modified dates are ignored, so that all metacards match.
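The exclusive-boundary behavior can be illustrated with a small sketch. The epoch-second values below are made up, and this mimics only the comparison logic, not DDF's actual implementation:

```shell
# Exclusive range check: a metacard whose created time exactly equals the
# --created-after boundary is NOT included in the dump.
created=1402617300   # metacard created time (epoch seconds, example value)
after=1402617300     # --created-after boundary, the same instant

if [ "$created" -gt "$after" ]; then
  echo included
else
  echo excluded
fi
```

A strictly-greater-than comparison (rather than greater-or-equal) is what makes the range exclusive.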
Date Syntax
Supported dates are taken from the common subset of ISO8601, matching the datetime from the following syntax:
datetime = time | date-opt-time
time = 'T' time-element [offset]
date-opt-time = date-element ['T' [time-element] [offset]]
date-element = std-date-element | ord-date-element | week-date-element
std-date-element = yyyy ['-' MM ['-' dd]]
ord-date-element = yyyy ['-' DDD]
week-date-element = xxxx '-W' ww ['-' e]
time-element = HH [minute-element] | [fraction]
minute-element = ':' mm [second-element] | [fraction]
second-element = ':' ss [fraction]
fraction = ('.' | ',') digit+
offset = 'Z' | (('+' | '-') HH [':' mm [':' ss [('.' | ',') SSS]]])
Examples
ddf@local>// Given we've ingested a few metacards
ddf@local>catalog:latest
#  ID                                 Modified Date              Title
1  a6e9ae09c792438e92a3c9d7452a449f   2014-06-13T09:56:18+10:00
2  b4aced45103a400da42f3b319e58c3ed   2014-06-13T09:52:12+10:00
3  a63ab22361e14cee9970f5284e8eb4e0   2014-06-13T09:49:36+10:00  myTitle
ddf@local>// Filter out older files
ddf@local>catalog:dump --created-after 2014-06-13T09:55:00+10:00 /home/bradh/ddf-catalog-dump
1 file(s) dumped in 0.015 seconds
ddf@local>// Filter out new file
ddf@local>catalog:dump --created-before 2014-06-13T09:55:00+10:00 /home/bradh/ddf-catalog-dump
2 file(s) dumped in 0.023 seconds
ddf@local>// Choose middle file
ddf@local>catalog:dump --created-after 2014-06-13T09:50:00+10:00 --created-before 2014-06-13T09:55:00+10:00 /home/bradh/ddf-catalog-dump
1 file(s) dumped in 0.020 seconds
ddf@local>// Modified dates work the same way
ddf@local>catalog:dump --modified-after 2014-06-13T09:50:00+10:00 --modified-before 2014-06-13T09:55:00+10:00 /home/bradh/ddf-catalog-dump
1 file(s) dumped in 0.015 seconds
ddf@local>// Can mix and match, most restrictive limits apply
ddf@local>catalog:dump --modified-after 2014-06-13T09:45:00+10:00 --modified-before 2014-06-13T09:55:00+10:00 --created-before 2014-06-13T09:50:00+10:00 /home/bradh/ddf-catalog-dump
1 file(s) dumped in 0.024 seconds
ddf@local>// Can use UTC instead of (or in combination with) explicit timezone offset
ddf@local>catalog:dump --modified-after 2014-06-13T09:50:00+10:00 --modified-before 2014-06-13T09:55:00Z /home/bradh/ddf-catalog-dump
2 file(s) dumped in 0.020 seconds
ddf@local>catalog:dump --modified-after 2014-06-13T09:50:00+10:00 --modified-before 2014-06-12T23:55:00Z /home/bradh/ddf-catalog-dump
1 file(s) dumped in 0.015 seconds
ddf@local>// Can leave off timezone, but default (local time on server) may not match what you expect!
ddf@local>catalog:dump --modified-after 2014-06-13T09:50:00 --modified-before 2014-06-13T09:55:00 /home/bradh/ddf-catalog-dump
1 file(s) dumped in 0.018 seconds
ddf@local>// Can leave off trailing minutes / seconds
ddf@local>catalog:dump --modified-after 2014-06-13T09 --modified-before 2014-06-13T09:55 /home/bradh/ddf-catalog-dump
2 file(s) dumped in 0.024 seconds
ddf@local>// Can use year and day number
ddf@local>catalog:dump --modified-after 2014-164T09:50:00 /home/bradh/ddf-catalog-dump
2 file(s) dumped in 0.027 seconds
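The timezone equivalence used in the examples above can be double-checked with GNU date (the -d flag is GNU-specific, so this will not work with BSD or Solaris date): 09:55 at UTC+10 and 23:55 UTC the previous day are the same instant.

```shell
# Both timestamps denote the same instant, so they convert to the same epoch second.
date -u -d "2014-06-13T09:55:00+10:00" +%s
date -u -d "2014-06-12T23:55:00Z" +%s
# Both print 1402617300.
```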
Known Command Issues
| Issue | Description |
|---|---|
| Ingesting more than 200,000 data files stored on NFS shares may cause a Java Heap Space error (Linux-only issue). | This is an NFS bug in which duplicate entries are created for some files during a file listing. Depending on the OS, some Linux machines handle the bug better and can retrieve a file list, albeit with an incorrect number of files. Others encounter a Java Heap Space error because there are too many files to list. |
| Ingesting millions of complex records into Solr can cause a Java Heap Space error. | Complex records contain spatial types and large text fields. |
| Ingesting a serialized data file with scientific notation in a WKT string causes a RuntimeException. | A WKT string with scientific notation, such as POINT (-34.8932113039107 -4.77974239601E-5), won't ingest. This occurs with the serialized data format only. |
Command Scheduler
Command Scheduler is a capability exposed through the Admin Console (https://localhost:8993/admin) that allows administrators to schedule Command Line Shell Commands to be run at specified intervals.
Usage
The Command Scheduler allows administrators to schedule Command Line Shell Commands to be run in a platform-independent way. For instance, if an administrator wanted to use the Catalog commands to export all records of a Catalog to a directory, the administrator could write a cron job or a scheduled task to remote into the container and execute the command. Writing these types of scripts is specific to the administrator’s operating system and also requires extra error-handling logic in case the container is not running. Alternatively, the administrator can create a Command Schedule, which currently requires only two fields. The Command Scheduler only runs when the container is running, so there is no need to verify that the container is up. In addition, when the container is restarted, the commands are rescheduled and executed again.
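For comparison, an OS-specific approach might look like the following hypothetical Linux cron entry, which uses the distribution's remote console client to run a nightly export (the installation path and export directory are assumptions, not defaults):

```
0 0 * * * /opt/ddf/bin/client "catalog:dump /opt/ddf/exports"
```

The equivalent Command Scheduler entry would simply be the command string catalog:dump /opt/ddf/exports with an Interval In Seconds of 86400, with no extra logic needed to check whether the container is running.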
Schedule a Command
-
Navigate to the Admin Console (https://localhost:8993/admin).
-
Select DDF Platform
-
Select Platform Command Scheduler.
-
Type the command or commands to be executed in the Command text field. Commands can be separated by a semicolon and will execute in order from left to right.
-
Type in a positive integer for the Interval In Seconds field.
-
Select the Save button. Once the Save button is selected, the command is executed immediately. Its next scheduled execution begins after the number of seconds specified in the Interval In Seconds field and repeats indefinitely until the container is shut down or the scheduled command is deleted.
|
Scheduled Commands can be updated and deleted. To delete, clear the fields and click Save. To update, modify the fields and click Save. |
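As an illustration, the "Hello World" example described below under Command Output would be entered in the scheduler form as:

```
Command:              echo "Hello World"
Interval In Seconds:  5
```

Multiple commands can be chained with semicolons in the same Command field, e.g., echo "export starting"; catalog:dump /opt/ddf/exports (the export directory here is hypothetical).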
Command Output
Commands that normally write out to the console will write out to the distribution’s log. For example, if an echo "Hello World" command is set to run every five seconds, the log displays the following:
16:01:32,582 | INFO | heduler_Worker-1 | ddf.platform.scheduler.CommandJob 68 | platform-scheduler | Executing command [echo Hello World]
16:01:32,583 | INFO | heduler_Worker-1 | ddf.platform.scheduler.CommandJob 70 | platform-scheduler | Execution Output: Hello World
16:01:37,581 | INFO | heduler_Worker-4 | ddf.platform.scheduler.CommandJob 68 | platform-scheduler | Executing command [echo Hello World]
16:01:37,582 | INFO | heduler_Worker-4 | ddf.platform.scheduler.CommandJob 70 | platform-scheduler | Execution Output: Hello World
In short, administrators can view the status of a run in the log as long as the log level is set to INFO.
Subscriptions Commands
| Title | Namespace | Description |
|---|---|---|
DDF :: Catalog :: Core :: PubSub Commands |
subscriptions |
The DDF PubSub shell commands provide functions to list the registered subscriptions in DDF and to delete subscriptions. |
|
The subscriptions commands are installed when the Catalog application is installed. |
Commands
ddf@local>subscriptions:
subscriptions:delete subscriptions:list
Command Descriptions
| Command | Description |
|---|---|
delete |
Deletes the subscription(s) specified by the search phrase or LDAP filter. |
list |
Lists the subscription(s) specified by the search phrase or LDAP filter. |
List Available System Console Commands
To get a list of commands, type the namespace of the desired extension, then press the Tab key.
For example, type subscriptions then press Tab.
System Console Command Help
For details on any command, type help followed by the command. For example, help subscriptions:list displays the data in the following table.
ddf@local>help subscriptions:list
DESCRIPTION
subscriptions:list
Allows users to view registered subscriptions.
SYNTAX
subscriptions:list [options] [search phrase or LDAP filter]
ARGUMENTS
search phrase or LDAP filter
Subscription ID to search for. Wildcard characters (*) can be used in the ID, e.g., my*name or *123. If an id is not provided, then
all of the subscriptions are displayed.
OPTIONS
filter, -f
Allows user to specify any type of LDAP filter rather than searching on single subscription ID.
You should enclose the LDAP filter in quotes since it will often have special characters in it.
An example LDAP filter would be:
(& (subscription-id=my*) (subscription-id=*169*))
which searches for all subscriptions starting with "my" and having 169 in the ID, which can be thought of as part of an IP address.
An example of the entire quoted command would be:
subscriptions:list -f "(& (subscription-id=my*) (subscription-id=*169*))"
--help
Display this help message
The help command provides a description of the command, along with the syntax on how to use it, arguments it accepts, and available options.
subscriptions:list Command Usage Examples
Note that no arguments are required for the subscriptions:list command. If no argument is provided, all subscriptions will be listed. A count of the subscriptions found matching the list command’s search phrase (or LDAP filter) is displayed first followed by each subscription’s ID.
List All Subscriptions
ddf@local>subscriptions:list
Total subscriptions found: 3

Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
List a Specific Subscription by ID
ddf@local>subscriptions:list "my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL"
Total subscriptions found: 1

Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
|
It is recommended to always quote the search phrase (or LDAP filter) argument to the command so that any special characters are properly processed. |
List Subscriptions Using Wildcards
ddf@local>subscriptions:list "my*"
Total subscriptions found: 3

Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification

ddf@local>subscriptions:list "*json*"
Total subscriptions found: 1

Subscription ID
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification

ddf@local>subscriptions:list "*WSDL"
Total subscriptions found: 2

Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
List Subscriptions Using an LDAP Filter
The example below illustrates searching for any subscription that has "json" or "v20" anywhere in its subscription ID.
ddf@local>subscriptions:list -f "(|(subscription-id=*json*) (subscription-id=*v20*))"
Total subscriptions found: 2

Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
The example below illustrates searching for any subscription that has "json" and "172.18.14.169" in its subscription ID. This could be a handy way of finding all subscriptions for a specific site.
ddf@local>subscriptions:list -f "(&(subscription-id=*json*) (subscription-id=*172.18.14.169*))"
Total subscriptions found: 1

Subscription ID
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
subscriptions:delete Command Usage Example
The arguments for the subscriptions:delete command are the same as for the list command, except that a search phrase or LDAP filter must be specified. If neither is specified, an error is displayed.
When the delete command is executed it will display each subscription ID it is deleting. If a subscription matches the search phrase but cannot be deleted, a message in red will be displayed with the ID. After all matching subscriptions are processed, a summary line is displayed indicating how many subscriptions were deleted out of how many matching subscriptions were found.
Delete a Specific Subscription Using Its Exact ID
ddf@local>subscriptions:delete "my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification"
Deleted subscription for ID = my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
Deleted 1 subscriptions out of 1 subscriptions found.
Delete Subscriptions Using Wildcards
ddf@local>subscriptions:delete "my*"
Deleted subscription for ID = my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
Deleted 2 subscriptions out of 2 subscriptions found.

ddf@local>subscriptions:delete "*json*"
Deleted subscription for ID = my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
Deleted 1 subscriptions out of 1 subscriptions found.
Delete All Subscriptions
ddf@local>subscriptions:delete *
Deleted subscription for ID = my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
Deleted 3 subscriptions out of 3 subscriptions found.
Delete Subscriptions Using an LDAP Filter
ddf@local>subscriptions:delete -f "(&(subscription-id=*WSDL) (subscription-id=*172.18.14.169*))"
Deleted subscription for ID = my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
Deleted 2 subscriptions out of 2 subscriptions found.
Platform Commands
| Title | Namespace | Description |
|---|---|---|
DDF Platform Commands |
platform |
The DDF Platform Shell Commands provide generic platform management functions. |
|
The Platform Commands are installed when the Platform application is installed. |
Commands
ddf@local>platform:
platform:describe platform:envlist
Command Descriptions
| Command | Description |
|---|---|
describe |
Shows the current platform configuration. |
envlist |
Provides a list of environment variables. |
List Available System Console Commands
To view a list of commands, type the namespace of the desired extension and press the Tab key.
For example, type platform then press Tab.
System Console Command Help
For details on any command, type help followed by the command. For example: help platform:envlist
Example Help
ddf@local>help platform:envlist
DESCRIPTION
platform:envlist
Provides a list of environment variables
SYNTAX
platform:envlist [options]
OPTIONS
--help
Display this help message
The help command provides a description of the provided command, along with the syntax for using it, the arguments it accepts, and the available options.
Persistence Commands
| Title | Namespace | Description |
|---|---|---|
DDF :: Persistence :: Core :: Commands |
store |
The Persistence Shell Commands are meant to be used with any PersistentStore implementations. They provide the ability to query and delete entries from the persistence store. |
Commands
store:delete store:list
Command Descriptions
| Command | Description |
|---|---|
delete |
Deletes entries from the persistence store that match a given CQL statement. |
list |
Lists entries that are stored in the persistence store. |
Available System Console Commands
To get a list of commands, type in the namespace of the desired extension then press the Tab key.
For example, type store, then press Tab.
System Console Command Help
For details on any command, type help then the command. For example, help store:list (see results of this command in the example below).
Example Help
ddf@local>help store:list
DESCRIPTION
store:list
Lists entries that are available in the persistent store.
SYNTAX
store:list [options]
OPTIONS
User ID, -u, --user
User ID to search for notifications. If an id is not provided, then all of the notifications for all users are displayed.
--help
Display this help message
Persistence Type, -t, --type
Type of item to retrieve from the persistence store.
Options: metacard, saved_query, notification, task, or workspace
CQL, -c, --cql
OGC CQL statement to query the persistence store. Not specifying returns all entries. More information on CQL is available at:
http://docs.geoserver.org/stable/en/user/tutorials/cql/cql_tutorial.html
The help command provides a description of the provided command, along with the syntax for using it, the arguments it accepts, and the available options.
CQL Syntax
The CQL syntax used should follow the OGC CQL format. Examples and a description of the grammar is located at http://docs.geoserver.org/stable/en/user/tutorials/cql/cql_tutorial.html.
Examples
Finding all notifications that were sent due to a download:

ddf@local>store:list --cql "application='Downloads'" --type notification

Deleting a specific notification:

ddf@local>store:delete --cql "id='fdc150b157754138a997fe7143a98cfa'" --type notification
Ingesting Data
Ingesting is the process of getting metadata into the Catalog Framework (including via the Content Framework). Ingested files are "transformed" into a neutral format that can be searched against as well as migrated to other formats and systems. There are multiple methods available for ingesting files into the DDF.
File types supported
DDF supports a wide variety of file types and data types for ingest. The DDF’s internal Input Transformers extract the necessary data into a generalized format. DDF supports ingest of many data types and commonly used file formats, such as Microsoft Office products (Word documents, Excel spreadsheets, and PowerPoint presentations) as well as PDF files, GeoJSON, and others.
Methods of Ingest
Easy (for fewer records or manual ingesting)
Ingest command (console)
The DDF console application has a command-line option for ingesting files.
Usage
The syntax for the ingest command is: ingest -t <transformer type> <file path relative to the installation path>
For XML data, run this command:
ingest -t xml examples/metacards/xml
Directory Monitor
The DDF Content application contains a Directory Monitor feature that allows files placed in a single directory to be monitored and ingested automatically. For more information about configuring a directory to be monitored, consult Directory Monitor.
Usage
Simply place the desired files in the monitored directory, and they will be ingested automatically. If, for any reason, the files cannot be ingested, they will be moved to an automatically created sub-folder named .errors. Optionally, ingested files can be automatically moved to a sub-folder called .ingested.
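The drop-in workflow can be sketched from the command line as follows. The monitored directory path below is a hypothetical stand-in for whatever directory is actually configured in the Directory Monitor; DDF itself would perform the ingest and create the .errors/.ingested sub-folders:

```shell
# Stand-in for the directory configured in the Directory Monitor (hypothetical path).
MONITORED_DIR=/tmp/ddf-monitored
mkdir -p "$MONITORED_DIR"

# Create a sample file and drop it into the monitored directory.
echo '<metacard/>' > /tmp/sample-metacard.xml
cp /tmp/sample-metacard.xml "$MONITORED_DIR"/

# A running DDF would now ingest the file automatically; files that fail
# to ingest would be moved by DDF to $MONITORED_DIR/.errors
ls "$MONITORED_DIR"
```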
Medium
External Methods
Several third-party tools, such as curl and the Chrome Advanced REST Client, can be used to send files and other types of data to DDF for ingest.
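For instance, a single record could be posted to a locally running DDF with curl. The endpoint URL and file name below are assumptions based on a default local installation and the Catalog REST endpoint, so adjust them for a real deployment; -k skips certificate verification for the default self-signed certificate:

```
curl -k -X POST -H "Content-Type: application/xml" -d @sample-metacard.xml https://localhost:8993/services/catalog
```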
Advanced (more records, automated ingest)
The DDF provides endpoints for both REST and SOAP services, allowing integration with other data systems and the ability to further automate ingesting data into the catalog. For further information, see Integrating Endpoints.
Troubleshooting
Exception Starting DDF (Windows)
Problem:
An exception is sometimes thrown starting DDF on a Windows machine (x86).
If using an unsupported terminal, java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32 is thrown.
Solution:
Install missing Windows libraries.
Some Windows platforms are missing libraries that are required by DDF. These libraries are provided by the Microsoft Visual C++ 2008 Redistributable Package x64 (http://www.microsoft.com/en-us/download/details.aspx?id=15336).
Blank Web Console
Problem:
https://localhost:8993/system/console opens as a blank page.
Solution:
Restart DDF.
. Shut down DDF from the console:
ddf@local>shutdown
. Start DDF back up:
./ddf
. Verify that all of the files were copied over correctly during the deploy bundles step.
CXF BusException
Problem:
The following exception is thrown:
org.apache.cxf.BusException: No conduit initiator
Solution:
Restart DDF.
. Shut down DDF:
ddf@local>shutdown
. Start up DDF:
./ddf
Distribution Will Not Start
Problem:
DDF will not start when calling the start script defined during installation.
Solution:
Complete the following procedure.
. Verify that Java is correctly installed.
+
java -version
. This should return something similar to:
+
java version "1.8.0_45"
Java™ SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot™ Server VM (build 25.45-b02, mixed mode)
. If running *nix, verify that bash is installed.
+
echo $SHELL
. This should return:
+
/bin/bash
DDF Is Unresponsive to Incoming Requests
Problem:
DDF is unresponsive to incoming requests.
An example of the log file when this problem is encountered:
Feb 7, 2013 10:51:33 AM org.apache.karaf.main.SimpleFileLock lock
INFO: locking
Feb 7, 2013 10:51:33 AM org.apache.karaf.main.Main doLock
INFO: Waiting for the lock ...
Feb 7, 2013 10:51:33 AM org.apache.karaf.main.SimpleFileLock lock
INFO: locking
Feb 7, 2013 10:51:33 AM org.apache.karaf.main.Main doLock
INFO: Waiting for the lock ...
Feb 7, 2013 10:51:34 AM org.apache.karaf.main.SimpleFileLock lock
INFO: locking
Feb 7, 2013 10:51:34 AM org.apache.karaf.main.SimpleFileLock lock
INFO: locking
Feb 7, 2013 10:51:35 AM org.apache.karaf.main.SimpleFileLock lock
INFO: locking
Feb 7, 2013 10:51:35 AM org.apache.karaf.main.SimpleFileLock lock
INFO: locking
Symptoms
Multiple java.exe processes are running, indicating that more than one DDF instance is running. This can happen when a previous DDF instance was not shut down before starting a new one.
Solutions:
Perform one or all of the following recommended solutions, as necessary.
-
Wait for proper shutdown of DDF prior to starting a new instance.
-
Verify that any running java.exe processes are not DDF instances (kill/close them if necessary).
-
Utilize automated start/stop scripts to run DDF as a service.
Overview
The DDF Admin Application contains components that are responsible for the installation and configuration of DDF and other DDF applications.
The administrative application enhances administrative capabilities when installing and managing DDF. It contains various services and interfaces that allow administrators more control over their systems.
The Admin application contains an application service that handles all operations that are performed on applications. This includes adding, removing, starting, stopping, and showing status.
Administrative User Interface
The Admin UI is the centralized location for administering the system. The Admin UI allows an administrator to install and remove selected applications and their dependencies and access configuration pages to configure and tailor system services and properties.
Modules
The Admin UI is a modular system that can be expanded with additional modules as necessary. DDF comes with the Configuration and Installation modules. However, new modules can be added, and each module is presented in its own tab of the Admin UI. More information on modules, including the ones that come with DDF, is available on the Modules page.
Prerequisites
Before the DDF Admin application can be installed:
-
the DDF Kernel must be running
-
the DDF Platform Application must be installed
-
the DDF Catalog Application must be installed
Install Applications
-
Before installing a DDF application, verify that its prerequisites have been met.
-
Copy the DDF application’s KAR file to the
<INSTALL_DIRECTORY>/deploy directory.
Verify
-
Verify the appropriate features for the DDF application have been installed using the
features:list command to view the KAR file’s features. -
Verify that the bundles within the installed features are in an active state.
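As an illustration, the verification above could look like the following console session. The grep filter and the abbreviated output are illustrative only, and exact feature names and versions vary by application:

```
ddf@local>features:list | grep -i catalog
[installed  ] [2.3.0] catalog-app ...
```

An installed feature is shown with an [installed] state; the bundles it pulled in can then be inspected with the console's bundle listing command (e.g., osgi:list in Karaf 2.x) to confirm they are in an Active state.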
Uninstall Applications
|
It is very important to save the KAR file or the feature repository URL for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
If the DDF application is deployed on the DDF Kernel in a custom installation (or the application has been upgraded previously), i.e., its KAR file is in the <INSTALL_DIRECTORY>/deploy directory, uninstall it by deleting this KAR file.
Otherwise, if the DDF application is running as part of the DDF distribution zip, it is uninstalled the first time and only the first time using the features:removeurl command:
features:removeurl -u <DDF application's feature repository URL>

Example: features:removeurl -u mvn:ddf.catalog/catalog-app/2.3.0/xml/features
The uninstall of the application can be verified by the absence of any of the DDF application’s features in the features:list command output.
Revert the Uninstall
If the uninstall of the DDF application needs to be reverted, this is accomplished by either:
-
copying the application’s KAR file previously in the
<INSTALL_DIRECTORY>/deploy directory, OR -
adding the application’s feature repository back into DDF and installing its main feature, which typically is of the form
<applicationName>-app, e.g., catalog-app.
features:addurl <DDF application's feature repository URL>
features:install <DDF application's main feature>
Example:
ddf@local>features:addurl mvn:ddf.catalog/catalog-app/2.3.0/xml/features
ddf@local>features:install catalog-app
Upgrade
To upgrade an application, complete the following procedure.
-
Uninstall the application by following the Uninstall Applications instructions above.
-
Install the new application KAR file by copying the
admin-app-X.Y.kar file to the <INSTALL_DIRECTORY>/deploy directory.
-
Start the application.
features:install admin-app
Modules
Modules are single components that implement the org.codice.ddf.ui.admin.api.module.AdminModule interface. Once they implement the interface and expose themselves as a service, they are added to the Admin UI as a new tab.
Included Modules
Installer Module
The application installer module enables a user to install and remove applications. Each application includes a features file that provides a description of the application and a list of the dependencies required to successfully run that application. The installer reads the features file and presents the applications in a manner that allows the administrator to visualize these dependencies. As applications are selected or deselected, the corresponding dependent applications are selected or deselected as necessary.
Set Up the Module
-
Install the module if it is not already pre-installed.
features:install admin-modules-installer -
Open a web browser and navigate to the Installation page.
http://DDF_HOST:DDF_PORT/admin -
Log in with the default username of "admin" (no quotes) and the default password of "admin" (no quotes).
-
Select the Installation tab if not already selected.
UI Basics
|
Do NOT deselect/uninstall the Platform App or the Admin App. Doing so will disable the use of this installer and the ability to install/uninstall other applications. |
-
Installation Profile Page
-
When a profile is selected, it will auto-select applications on the Select Application Page and install them automatically.
-
If you choose to customize a profile, you will be given the option to manually select the applications on the Select Application Page.
-
-
In the Select applications to install page, hover over each application to view additional details about the application.
-
New applications can be added and existing applications can be upgraded using the Applications Module.
-
When an application is selected, dependent applications will automatically be selected.
-
When an application is unselected, dependent applications will automatically be unselected.
-
If apps are preselected when the Select applications to install page is reached, they will be uninstalled if unselected.
-
Applications can also be installed using KAR deployment as stated in Application Installation.
Display the Features File in the Installer
To ensure that the installer can correctly interpret and display application details, there are several guidelines that should be followed when creating the features file for the application.
-
Be sure that only one feature (in the features.xml) has the auto-install tag (install='auto'). This is the feature that the installer displays to the user (name, description, version, etc.). It is typically named after the application itself, and the description provides a complete application description.
-
Verify that the one feature specified to auto-install has a complete list of all of its dependencies to ensure the dependency tree can be constructed correctly.
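The guidelines above can be sketched in a minimal features file. The application, feature, and bundle names here are hypothetical; only the structure (a single install='auto' feature carrying the display metadata and the complete dependency list) reflects the Karaf features schema:

```xml
<features name="example-app-1.0.0" xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
  <!-- The ONE auto-install feature: this is what the installer displays. -->
  <feature name="example-app" version="1.0.0" install="auto"
           description="Example App :: A complete description shown in the installer">
    <!-- List every dependency so the installer can build the dependency tree. -->
    <feature>example-core</feature>
    <feature>example-ui</feature>
  </feature>

  <!-- Supporting features are NOT marked install="auto". -->
  <feature name="example-core" version="1.0.0">
    <bundle>mvn:org.example/example-core/1.0.0</bundle>
  </feature>
  <feature name="example-ui" version="1.0.0">
    <bundle>mvn:org.example/example-ui/1.0.0</bundle>
  </feature>
</features>
```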
Example Screenshots
The following are examples of what the Installation Steps/Pages look like:
Welcome Page
General Configuration Page
Installation Profile Page
|
The Platform App, Admin App, and Security Services App CANNOT be selected or unselected, as they are installed by default and removing them can cause errors. The Security Services App appears to be unselected upon first view of the tree structure, but it is in fact automatically installed during a later part of the installation process. |
Final Page
Configuration Module
The configuration module allows administrators to change bundle and service configurations.
Set Up the Module
-
Install the module if it is not pre-installed.
features:install admin-modules-configuration -
Open a web browser and navigate to the Admin UI page.
-
Select the Configurations tab if not already selected.
Configurations Tab
(IMG)
Admin Console Access Control
If you have integrated DDF with your existing security infrastructure, then you may want to limit access to parts of the DDF based on user roles/groups.
Restricting DDF Access
-
See the documentation for your specific security infrastructure to configure users, roles, and groups.
-
On the
/system/console/configMgr page, select the Web Context Policy Manager. (IMG)-
A dialogue will pop up that allows you to edit DDF access restrictions.
-
Once you have configured your realms in your security infrastructure, you can associate them with DDF contexts.
-
If your infrastructure supports multiple authentication methods, they may be specified on a per-context basis.
-
Role requirements may be enforced by configuring the required attributes for a given context.
-
The whitelist allows child contexts to be excluded from the authentication constraints of their parents.
Overview
The DDF Catalog provides a framework for storing, searching, processing, and transforming information. Clients typically perform query, create, read, update, and delete (QCRUD) operations against the Catalog. At the core of the Catalog functionality is the Catalog Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.
Catalog Architecture
Installing and Uninstalling
Prerequisites
Before the DDF Catalog application can be installed:
-
the DDF Kernel must be running
-
the DDF Platform application must be installed
Installing
-
Before installing a DDF application, verify that its prerequisites have been met.
-
Copy the DDF application’s KAR file to the
<INSTALL_DIRECTORY>/deploy directory.
Verifying
-
Verify the appropriate features for the DDF application have been installed using the
features:list command to view the KAR file’s features. -
Verify that the bundles within the installed features are in an active state.
Uninstalling
|
It is very important to save the KAR file or the feature repository URL for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
If the DDF application is deployed on the DDF Kernel in a custom installation (or the application has been upgraded previously), i.e., its KAR file is in the <INSTALL_DIRECTORY>/deploy directory, uninstall it by deleting this KAR file.
Otherwise, if the DDF application is running as part of the DDF distribution zip, it is uninstalled the first time and only the first time using the features:removeurl command:
features:removeurl -u <DDF application's feature repository URL> Example: features:removeurl -u mvn:ddf.catalog/catalog-app/2.3.0/xml/features
The uninstall of the application can be verified by the absence of any of the DDF application’s features in the features:list command output.
|
The repository URLs for installed applications can be obtained by entering:
|
Revert the Uninstall
If the uninstall of the DDF application needs to be reverted, this is accomplished by either:
-
copying the application’s KAR file previously in the
<INSTALL_DIRECTORY>/deploy directory, OR -
adding the application’s feature repository back into DDF and installing its main feature, which typically is of the form
<applicationName>-app, e.g., catalog-app.
features:addurl <DDF application's feature repository URL>
features:install <DDF application's main feature>
Example:
ddf@local>features:addurl mvn:ddf.catalog/catalog-app/2.3.0/xml/features
ddf@local>features:install catalog-app
Upgrade
To upgrade an application, complete the following procedure.
-
Uninstall the application by following the Uninstall Applications instructions above.
-
Install the new application KAR file by copying the admin-app-X.Y.kar file to the
<INSTALL_DIRECTORY>/deploy directory. -
Start the application.
features:install admin-app -
Complete the steps in the Verify section above to determine if the upgrade was successful.
Federation UI
The federation user interface is a convenient way to manage federated data sources for the DDF.
Federation enables the inclusion of remote sources, including other DDF installations, in queries. For a full description of Federation, see Extending Federation.
Installing
The Federation UI is installed by default.
Configuring
No configuration is required.
Using
-
Go to the Admin UI at
http://localhost:8181/admin/index.html. -
Open DDF-Catalog Application
Adding a Source
-
Press the Add button.
(IMG) -
Name the source.
(IMG) -
Choose the source type. The type of source selected will determine the options to configure. (IMG)
Editing a Source
-
Click the name of the source to edit.
-
Update relevant properties.
(IMG) -
Click Save.
Enabling/Disabling a Source
-
Select the drop down menu for the source under the heading Type.
(IMG) -
Set to enabled/disabled.
Removing a Source
-
Click the delete source icon.
(IMG) -
Check the box next to the source to delete. (IMG)
-
Click delete.
Overview
The DDF Content application provides a framework for storing, reading, processing, transforming and cataloging data. This guide documents the installation, maintenance, and support of this application.
Prerequisites
Before the DDF Content application can be installed, the following prerequisites must be met:
-
the DDF Kernel must be running,
-
the DDF Platform Application must be installed, and
-
the DDF Catalog Application must be installed.
The Content application will continue to function properly as a content store without the Catalog application. However, the Content application will not support creating metacards for ingested content. In addition, without the Catalog application, the Content application will be displayed as FAILED by the Platform Status Service and the Application Commands.
Installing
-
Before installing a DDF application, verify that its prerequisites have been met.
-
Copy the DDF application’s KAR file to the
<INSTALL_DIRECTORY>/deploy directory.
|
These Installation steps are the same whether DDF was installed from a distribution zip or a custom installation using the DDF Kernel zip. |
Verifying
-
Verify the appropriate features for the DDF application have been installed using the
features:list command to view the KAR file’s features.
-
Verify that the bundles within the installed features are in an active state.
Uninstalling
|
It is very important to save the KAR file or the feature repository URL for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
If the DDF application is deployed on the DDF Kernel in a custom installation (or the application has been upgraded previously), i.e., its KAR file is in the <INSTALL_DIRECTORY>/deploy directory, uninstall it by deleting this KAR file.
Otherwise, if the DDF application is running as part of the DDF distribution zip, it is uninstalled the first time and only the first time using the features:removeurl command:
features:removeurl -u <DDF application's feature repository URL>
Example: features:removeurl -u mvn:ddf.catalog/catalog-app/2.3.0/xml/features
The uninstall of the application can be verified by the absence of any of the DDF application’s features in the features:list command output.
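The feature repository URLs in these commands follow Karaf's mvn: URL scheme (mvn:groupId/artifactId/version[/type[/classifier]]). As an illustrative aid (not part of DDF), a small sketch that splits such a URL into its parts:

```python
# Illustrative sketch: split a Karaf-style mvn: feature repository URL into
# its components. The groupId/artifactId/version/type/classifier layout is
# the standard Pax URL convention used in the examples above.

def parse_mvn_url(url):
    if not url.startswith("mvn:"):
        raise ValueError("not an mvn: URL: %r" % url)
    parts = url[len("mvn:"):].split("/")
    keys = ("groupId", "artifactId", "version", "type", "classifier")
    return dict(zip(keys, parts))

# The catalog-app example from the text:
repo = parse_mvn_url("mvn:ddf.catalog/catalog-app/2.3.0/xml/features")
print(repo["artifactId"], repo["version"])  # catalog-app 2.3.0
```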
Reverting the Uninstall
If the uninstall of the DDF application needs to be reverted, this is accomplished by either:
-
copying the application’s KAR file previously in the
<INSTALL_DIRECTORY>/deploy directory, OR
-
adding the application’s feature repository back into DDF and installing its main feature, which typically is of the form <applicationName>-app, e.g., catalog-app.
features:addurl <DDF application's feature repository URL>
features:install <DDF application's main feature>
Example:
ddf@local>features:addurl mvn:ddf.catalog/catalog-app/2.3.0/xml/features
ddf@local>features:install catalog-app
Upgrading
To upgrade an application, complete the following procedure.
-
Uninstall the application by following the Uninstall Applications instructions above.
-
Install the new application KAR file by copying the content-app-X.Y.kar file to the
<INSTALL_DIRECTORY>/deploy directory.
-
Start the application.
features:install content-app
-
Complete the steps in the Verify section above to determine if the upgrade was successful.
Overview
The Platform application is considered to be a core application of the distribution. The Platform application provides the fundamental building blocks that the distribution needs to run. These building blocks include subsets of:
-
Karaf (http://karaf.apache.org/),
-
Cellar (http://karaf.apache.org/index/subprojects/cellar.html), and
-
Camel (http://camel.apache.org/).
Included as part of the Platform application is also a Command Scheduler. The Command Scheduler allows users to schedule Command Line Shell Commands to run at certain specified intervals.
Usage
The Platform application is a core building block for any application and should be referenced for its core component versions so that developers can ensure compatibility with their own applications. The Command Scheduler that is included in the Platform application should be used by those that need or like the convenience of a "platform independent" method of running certain commands, such as backing up data or logging settings. More information can be found on the Command Scheduler page.
Install and Uninstall
Prerequisites
Before the DDF Platform application can be installed:
-
the DDF Kernel must be running.
Installing
-
Before installing a DDF application, verify that its prerequisites have been met.
-
Copy the DDF application’s KAR file to the
<INSTALL_DIRECTORY>/deploy directory.
|
These Installation steps are the same whether DDF was installed from a distribution zip or a custom installation using the DDF Kernel zip. |
Verifying
-
Verify the appropriate features for the DDF application have been installed using the
features:list command to view the KAR file’s features.
-
Verify that the bundles within the installed features are in an active state.
Uninstalling
|
It is very important to save the KAR file or the feature repository URL for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
If the DDF application is deployed on the DDF Kernel in a custom installation (or the application has been upgraded previously), i.e., its KAR file is in the <INSTALL_DIRECTORY>/deploy directory, uninstall it by deleting this KAR file.
Otherwise, if the DDF application is running as part of the DDF distribution zip, it is uninstalled the first time and only the first time using the features:removeurl command:
features:removeurl -u <DDF application's feature repository URL>
Example: features:removeurl -u mvn:ddf.platform/platform-app/2.4.0/xml/features
The uninstall of the application can be verified by the absence of any of the DDF application’s features in the features:list command output.
|
The repository URLs for installed applications can be obtained by entering:
|
Reverting the Uninstall
If the uninstall of the DDF application needs to be reverted, this is accomplished by either:
-
copying the application’s KAR file previously in the
<INSTALL_DIRECTORY>/deploy directory, OR
-
adding the application’s feature repository back into DDF and installing its main feature, which typically is of the form
<applicationName>-app, e.g., platform-app.
features:addurl <DDF application's feature repository URL>
features:install <DDF application's main feature>
Example:
ddf@local>features:addurl mvn:ddf.platform/platform-app/2.4.0/xml/features
ddf@local>features:install platform-app
Upgrading
To upgrade an application, complete the following procedure.
-
Uninstall the application by following the Uninstall Applications instructions above.
-
Install the new application KAR file by copying the platform-app-X.Y.Z.kar file to the <INSTALL_DIRECTORY>/deploy directory.
-
Start the application.
features:install platform-app
-
Complete the steps in the Verifying section above to determine if the upgrade was successful.
Configuration
This component can be configured using the normal processes described in the Configuring DDF section. The configurable properties are accessed from the Schedule Command Configuration in the Admin Console.
Configurable Properties
| Property | Type | Description | Default Value | Required |
|---|---|---|---|---|
|
String |
Shell command to be used within the container. For example, |
|
yes |
|
Integer |
The interval of time in seconds between each execution. This must be a positive integer. For example, 3600 is 1 hour. |
|
yes |
|
The Platform application includes other third party packages, such as Apache CXF and Apache Camel. These are available for use by third party developers, but their versions can change at anytime with future releases of the Platform application. The exact versions of the third party applications that are used can be found in the Release Notes for the Platform application. |
Overview
The Security application provides authentication, authorization, and auditing services for the DDF. These services comprise both a framework that developers and integrators can extend and a reference implementation that meets security requirements. More information about the security framework and how everything works together as a single security solution can be found on the Managing Web Service Security page.
This page documents the installation, maintenance, and support of this application.
Install and Uninstall
Prerequisites
Before the DDF Security application can be installed:
-
the DDF Kernel must be running
-
the DDF Platform Application must be installed
Install
-
Before installing a DDF application, verify that its prerequisites have been met.
-
Copy the DDF application’s KAR file to the
<INSTALL_DIRECTORY>/deploy directory.
|
These Installation steps are the same whether DDF was installed from a distribution zip or a custom installation using the DDF Kernel zip. |
Verify
-
Verify the appropriate features for the DDF application have been installed using the
features:list command to view the KAR file’s features.
-
Verify that the bundles within the installed features are in an active state.
Uninstall
|
It is very important to save the KAR file or the feature repository URL for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
If the DDF application is deployed on the DDF Kernel in a custom installation (or the application has been upgraded previously), i.e., its KAR file is in the <INSTALL_DIRECTORY>/deploy directory, uninstall it by deleting this KAR file.
Otherwise, if the DDF application is running as part of the DDF distribution zip, it is uninstalled the first time and only the first time using the features:removeurl command:
features:removeurl -u <DDF application's feature repository URL>
Example: features:removeurl -u mvn:ddf.security/security-services-app/2.4.0/xml/features
The uninstall of the application can be verified by the absence of any of the DDF application’s features in the features:list command output.
|
The repository URLs for installed applications can be obtained by entering:
|
Revert the Uninstall
If the uninstall of the DDF application needs to be reverted, this is accomplished by either:
-
copying the application’s KAR file previously in the
<INSTALL_DIRECTORY>/deploy directory, OR
-
adding the application’s feature repository back into DDF and installing its main feature, which typically is of the form
<applicationName>-app, e.g., catalog-app.
features:addurl <DDF application's feature repository URL>
features:install <DDF application's main feature>
Example:
ddf@local>features:addurl mvn:ddf.catalog/catalog-app/2.3.0/xml/features
ddf@local>features:install catalog-app
Upgrade
To upgrade an application, complete the following procedure.
-
Uninstall the application by following the Uninstall Applications instructions above.
-
Install the new application KAR file by copying the security-services-app-X.Y.kar file to the
<INSTALL_DIRECTORY>/deploy directory.
-
Start the application.
features:install security-services-app
-
Complete the steps in the Verify section above to determine if the upgrade was successful.
Configuration
This component can be configured using the normal processes described in the Configuration section. Within the pages for each of the applications are specific instructions on the configurations for the bundles and any additional information that may help decide how the configuration should be set for use cases.
Whitelist
The following packages have been exported by the DDF Security application and are approved for use by third parties:
-
ddf.security.expansion
-
ddf.security.sts.client.configuration
-
ddf.security.common.callback
-
ddf.security.common.util
Applications
-
Security Core
-
Security CAS
-
Security Encryption
-
Security PEP
-
Security PDP
-
Security STS
Overview
The DDF Spatial Application provides a KML transformer and a KML network link endpoint that allow a user to generate a View-based KML Query Results Network Link.
This page describes:
-
which applications must be installed prior to installing this application.
-
how to install the DDF Spatial Application.
-
how to verify if the application was successfully installed.
-
how to uninstall the application.
-
how to upgrade the application.
-
the optional features available in the application.
-
the console commands that come with the application.
Prerequisites
Before the DDF Spatial Application can be installed:
-
the DDF Kernel must be running
-
the DDF Platform Application must be installed
-
the DDF Catalog Application must be installed
Installing
-
Before installing a DDF application, verify that its prerequisites have been met.
-
Copy the DDF application’s KAR file to the
<INSTALL_DIRECTORY>/deploy directory.
|
These Installation steps are the same whether DDF was installed from a distribution zip or a custom installation using the DDF Kernel zip. |
Verifying
-
Verify the appropriate features for the DDF application have been installed using the
features:list command to view the KAR file’s features.
-
Verify that the bundles within the installed features are in an active state.
Uninstalling
|
It is very important to save the KAR file or the feature repository URL for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
If the DDF application is deployed on the DDF Kernel in a custom installation (or the application has been upgraded previously), i.e., its KAR file is in the <INSTALL_DIRECTORY>/deploy directory, uninstall it by deleting this KAR file.
Otherwise, if the DDF application is running as part of the DDF distribution zip, it is uninstalled the first time and only the first time using the features:removeurl command:
features:removeurl -u <Application's feature repository URL>
Example: features:removeurl -u mvn:org.codice.ddf.spatial/spatial-app/2.5.0/xml/features
The uninstall of the application can be verified by the absence of any of the DDF application’s features in the features:list command output.
|
The repository URLs for installed applications can be obtained by entering:
|
Reverting the Uninstall
If the uninstall of the DDF application needs to be reverted, this is accomplished by either:
-
copying the application’s KAR file previously in the
<INSTALL_DIRECTORY>/deploy directory, OR
-
adding the application’s feature repository back into DDF and installing its main feature, which typically is of the form
<applicationName>-app, e.g., catalog-app.
features:addurl <Application's feature repository URL>
features:install <Application's main feature>
Example:
ddf@local>features:addurl mvn:ddf.catalog/catalog-app/2.3.0/xml/features
ddf@local>features:install catalog-app
Upgrading
To upgrade an application, complete the following procedure.
-
Uninstall the application by following the Uninstall Applications instructions above.
-
Install the new application KAR file by copying the
spatial-app-X.Y.kar file to the <INSTALL_DIRECTORY>/deploy directory.
-
Start the application.
-
Complete the steps in the Verify section above to determine if the upgrade was successful.
Optional Features
Offline Gazetteer Service
In the Spatial Application, you have the option to install a feature called offline-gazetteer. This feature enables you to use an offline index of GeoNames data (as an alternative to the GeoNames Web service enabled by the webservice-gazetteer feature) to perform searches via the gazetteer search box in the Search UI.
To use the offline gazetteer, you must first create an index with the geonames:update command, which is explained in the next section.
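The automatic download described below pulls dumps from the GeoNames export site. As a hedged sketch (the base URL comes from this page; the helper is illustrative, not a DDF command), the file fetched for a given argument would be:

```python
# Illustrative sketch: build the GeoNames dump URL that corresponds to a
# geonames:update argument such as a country code ("AU") or a cities file
# ("cities1000"). Dumps are served as .zip files from the export site.

GEONAMES_DUMP = "http://download.geonames.org/export/dump"

def dump_url(argument):
    return "%s/%s.zip" % (GEONAMES_DUMP, argument)

print(dump_url("AU"))          # country dump for Australia
print(dump_url("cities1000"))  # cities with population >= 1000
```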
Console Commands
GeoNames Commands
| Title | Namespace | Description |
|---|---|---|
DDF :: Spatial :: Commands |
|
The |
Commands
geonames:update
Command Descriptions
| Command | Description |
|---|---|
|
Adds new entries to an existing local GeoNames index. Entries can be downloaded manually from http://download.geonames.org/export/dump, and the absolute path of the file is then passed as an argument to the command (e.g., /Users/johndoe/Downloads/AU.zip). Currently, .txt and .zip files are supported for manual entries. Entries can also be downloaded automatically from http://download.geonames.org/export/dump by passing a country code as an argument to the command (e.g., AU), which adds that country to the local GeoNames index. The full list of available country codes can be found at http://download.geonames.org/export/dump/countryInfo.txt. Using the argument "all" downloads all of the current country codes (this process may take some time). In addition to country codes, GeoNames also provides entries for cities based on their population sizes. The arguments "cities1000", "cities5000", and "cities15000" add cities to the index that have at least 1000, 5000, or 15000 people, respectively. The index location can be configured via the Admin UI or the Felix Web Console. |
Overview
The Standard Search UI is a user interface that enables users to search a catalog and associated sites for content and metadata.
This page describes:
-
Which applications must be installed prior to installing this application.
-
How to install the DDF Standard Search UI.
-
How to verify if the DDF Standard Search UI was successfully installed.
-
How to uninstall the DDF Standard Search UI.
-
How to upgrade the DDF Standard Search UI.
Prerequisites
Before the DDF Search UI application can be installed:
-
the DDF Kernel must be running.
-
the DDF Platform Application must be installed.
-
the DDF Catalog Application must be installed.
Installing
The Search UI application is installed by default.
If using the Admin application, this app can be installed via the Admin Console or the System Console (at http://localhost:8181/system/console/features). Otherwise, follow steps below.
-
Before installing a DDF application, verify that its prerequisites have been met.
-
Copy the DDF application’s KAR file to the
<INSTALL_DIRECTORY>/deploy directory.
|
These Installation steps are the same whether DDF was installed from a distribution zip or a custom installation using the DDF Kernel zip. |
Verifying Installation
-
Verify the appropriate features for the DDF application have been installed using the
features:list command to view the KAR file’s features.
-
Verify that the bundles within the installed features are in an active state.
Configuring
Configure individual features within the application with the Admin Console.
Configurable Properties
Search UI Endpoint
| Title | Property | Type | Description | Required |
|---|---|---|---|---|
Disable Cache |
cacheDisabled |
Boolean |
Disables use of cache. |
no |
Disable Normalization |
normalizationDisabled |
Boolean |
Disables relevance and distance normalization. |
no |
Standard Search UI
| Title | Property | Type | Description | Required |
|---|---|---|---|---|
Header |
header |
String |
The header text to be rendered on the Search UI. |
no |
Footer |
footer |
String |
The footer text to be rendered on the Search UI. |
no |
Style |
style |
String |
The style name (background color) of the Header and Footer. |
yes |
Text Color |
textColor |
String |
The text color of the Header and Footer. |
yes |
Result count |
resultCount |
Integer |
The max number of results to display. |
yes |
Imagery Providers |
imageryProviders |
String |
List of imagery providers to use. Example: {"type": "WMS", "url": "http://example.com", "layers": ["layer1", "layer2"], "parameters": {"FORMAT": "image/png", "VERSION": "1.1.1"}, "alpha": 0.5} |
no |
Terrain Providers |
terrainProvider |
String |
Terrain provider to use for height data. Valid types are: Example: |
no |
Map Projection |
projection |
String |
Projection of imagery providers |
no |
Connection timeout |
timeout |
Integer |
The WMS connection timeout. |
yes |
Show sign in |
signIn |
Boolean |
Whether or not to authenticate users. |
no |
Show tasks |
task |
Boolean |
Whether or not to display progress of background tasks. |
no |
Show Gazetteer |
gazetteer |
Boolean |
Whether or not to show gazetteer for searching place names. |
no |
Show Uploader |
ingest |
Boolean |
Whether or not to show upload menu for adding new metadata. |
no |
Type Name Mapping |
typeNameMapping |
String[] |
The mapping of content types to displayed names. |
no |
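The imageryProviders value above is JSON, so parsing an entry before saving it catches syntax mistakes early. A minimal sketch, assuming the cleaned-up WMS example from the table (the required-key check is an assumption based on that example, not an authoritative schema):

```python
import json

# Illustrative sketch: parse and sanity-check an imagery-provider entry like
# the WMS example in the table above. Required keys ("type", "url") are an
# assumption based on that example, not a documented schema.

entry_text = """
{"type": "WMS",
 "url": "http://example.com",
 "layers": ["layer1", "layer2"],
 "parameters": {"FORMAT": "image/png", "VERSION": "1.1.1"},
 "alpha": 0.5}
"""

def check_provider(text):
    provider = json.loads(text)
    missing = [k for k in ("type", "url") if k not in provider]
    if missing:
        raise ValueError("missing keys: %s" % missing)
    return provider

provider = check_provider(entry_text)
print(provider["type"], provider["alpha"])  # WMS 0.5
```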
Uninstalling
If using the Admin application, applications can be removed via the Admin Console.
Uninstalling manually
|
It is very important to save the KAR file or the feature repository URL for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
If the DDF application is deployed on the DDF Kernel in a custom installation (or the application has been upgraded previously), i.e., its KAR file is in the <INSTALL_DIRECTORY>/deploy directory, uninstall it by deleting this KAR file.
Otherwise, if the DDF application is running as part of the DDF distribution zip, it is uninstalled the first time and only the first time using the features:removeurl command:
features:removeurl -u <DDF application's feature repository URL>
Example: features:removeurl -u mvn:ddf.ui.search/search-app/2.5.0/xml/features
The uninstall of the application can be verified by the absence of any of the DDF application’s features in the features:list command output.
|
The repository URLs for installed applications can be obtained by entering:
|
Reverting the Uninstall
If the uninstall of the DDF application needs to be reverted, this is accomplished by either:
-
copying the application’s KAR file previously in the
<INSTALL_DIRECTORY>/deploy directory, OR
-
adding the application’s feature repository back into DDF and installing its main feature, which typically is of the form
<applicationName>-app, e.g., catalog-app.
features:addurl <DDF application's feature repository URL>
features:install <DDF application's main feature>
Example:
ddf@local>features:addurl mvn:ddf.catalog/catalog-app/2.3.0/xml/features
ddf@local>features:install catalog-app
Upgrading
Upgrading to a newer version of the app can be performed by the Admin Console.
Upgrading manually
To upgrade an application, complete the following procedure.
-
Uninstall the application by following the Uninstall Applications instructions above.
-
Install the new application KAR file by copying the search-app-X.Y.kar file to the
<INSTALL_DIRECTORY>/deploy directory.
-
Start the application.
features:install search-app
-
Complete the steps in the Verify section above to determine if the upgrade was successful.
Troubleshooting DDF Standard Search UI
Deleted Records Are Being Displayed In The Standard Search UI’s Search Results
When queries are issued by the Standard Search UI, the query results that are returned are also cached in an internal Solr database for faster retrieval when the same query may be issued in the future. As records are deleted from the catalog provider, this Solr cache is kept in sync by also deleting the same records from the cache if they exist.
Sometimes the cache may get out of sync with the catalog provider, leaving records in the cache that should have been deleted. When this occurs, users of the Standard Search UI may see stale results because these records are still returned from the cache. Records can then be deleted from the cache manually by entering the URL commands listed below in a browser. In these command URLs, metacard_cache is the name of the Solr query cache.
-
To delete all of the records in the Solr cache:
http://localhost:8181/solr/metacard_cache/update?stream.body=<delete><query>*:*</query></delete>&commit=true
-
To delete a specific record in the Solr cache by ID (specified by the original_id_txt field):
http://localhost:8181/solr/metacard_cache/update?stream.body=<delete><query>original_id_txt:50ffd32b21254c8a90c15fccfb98f139</query></delete>&commit=true
-
To delete record(s) in the Solr cache using a query on a field in the record(s) - in this example, the title_txt field is being used with wildcards to search for any records with word remote in the title:
http://localhost:8181/solr/metacard_cache/update?stream.body=<delete><query>title_txt:*remote*</query></delete>&commit=true
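The XML in the query strings above should be percent-encoded when the request is built programmatically rather than typed into a browser. A sketch that constructs the encoded delete-by-query URL (host, port, and the metacard_cache core name are taken from the examples above; nothing is sent over the network):

```python
from urllib.parse import quote

# Illustrative sketch: build the percent-encoded form of the Solr cache
# delete-by-query URLs shown above. The endpoint path and the metacard_cache
# core name come from those examples; this only constructs the URL.

SOLR_UPDATE = "http://localhost:8181/solr/metacard_cache/update"

def delete_url(query):
    body = "<delete><query>%s</query></delete>" % query
    return "%s?stream.body=%s&commit=true" % (SOLR_UPDATE, quote(body))

print(delete_url("*:*"))                 # delete every cached record
print(delete_url("title_txt:*remote*"))  # delete by title wildcard
```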
Integrating DDF
Overview
Distributed Data Framework (DDF) is an agile and modular integration framework. It is primarily focused on data integration, enabling clients to insert, query and transform information from disparate data sources via the DDF Catalog. A Catalog API allows integrators to insert new capabilities at various stages throughout each operation. DDF is designed with the following architectural qualities to benefit integrators.
This page supports integrating DDF with existing applications or frameworks.
Overview
The Application Service is a service that allows components to perform operations on applications. This includes adding, removing, starting, stopping, and viewing status.
API
The Application service has multiple interfaces which are exposed on to the OSGi runtime for other applications to use. For more information on these interfaces, see Application Service Interfaces.
JMX Managed Bean
Some of the Application service API is exposed via JMX. It can either be accessed using the JMX API or from a REST-based interface created by Jolokia that comes with DDF. Here are the interfaces that are exposed in the Managed Bean:
Creates an application hierarchy tree that shows relationships between applications.
Starts an application with the given name.
Stops an application with the given name.
Adds a list of applications that are specified by their URLs.
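Since Jolokia bridges JMX to REST, these managed-bean operations can be invoked by POSTing a JSON "exec" request. The sketch below only assembles the payload; the MBean object name and operation name shown are assumptions, so confirm them against Jolokia's list endpoint on a running instance before use.

```python
import json

# Illustrative sketch: build a Jolokia "exec" request body for one of the
# Application service MBean operations described above. NOTE: the object
# name and operation name below are HYPOTHETICAL placeholders.

MBEAN = "org.codice.ddf.admin:type=ApplicationService"  # assumption, not verified

def exec_payload(operation, *args):
    return json.dumps({
        "type": "exec",
        "mbean": MBEAN,
        "operation": operation,
        "arguments": list(args),
    })

# e.g. start an application by name (operation name is also an assumption):
print(exec_payload("startApplication", "catalog-app"))
```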
Configuration Files
Support for configuration files was added to allow for an initial installation of applications on first run.
Initial Application Installation
|
This application list configuration file is only read on first start. |
To minimize the chance of accidentally installing or uninstalling applications, the configuration file for installing the initial applications is only read the first time that DDF is started. The only way to change which applications are active after DDF has been started is to use the console commands. Operations can also be performed with the administrator web console that comes with DDF by using the Features tab to install the main feature for the desired application. This method will be deprecated once the application module has been built for the Admin UI.
The application list file is located at DDF_HOME/etc/org.codice.ddf.admin.applicationlist.properties
Applications should be defined in a <name>=<location> syntax, where the location may be empty for applications that have already been added to DDF or were prepackaged with the distribution.
Examples:
# Local application:
opendj-embedded
# Application installed into a local maven repository:
opendj-embedded=mvn:org.codice.opendj.embedded/opendj-embedded-app/1.0.1-SNAPSHOT/xml/features
# Application located on the file system:
opendj-embedded=file:/location/to/opendj-embedded-app-1.0.1-SNAPSHOT.kar
Applications will be started in the order they are listed in the file. If an application is listed, DDF will also attempt to install all dependencies for that application.
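As a sketch of this <name>=<location> format (file semantics taken from the section above; the parser itself is illustrative, not DDF code):

```python
# Illustrative sketch: parse an application list in the <name>=<location>
# format described above. Comments and blank lines are skipped; a bare name
# (no '=' or an empty location) means the application is already available.

def parse_app_list(text):
    apps = []  # order matters: applications start in the listed order
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, location = line.partition("=")
        apps.append((name.strip(), location.strip() or None))
    return apps

sample = """\
# Local application:
opendj-embedded
# Application installed into a local maven repository:
opendj-embedded=mvn:org.codice.opendj.embedded/opendj-embedded-app/1.0.1-SNAPSHOT/xml/features
"""
print(parse_app_list(sample))
```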
Defining Platform Settings
The platform settings can be configured by a file located in DDF_HOME/etc/ddf.platform.config.cfg
The settings in this file can be changed at any time during the DDF lifecycle (before installation, after, or during run-time) and will immediately update the corresponding properties for the platform global configuration.
Console Commands
The application service comes with various console commands that can be executed on the DDF system console. More information is available on the Application Commands page.
Application Service Interfaces
There are multiple ways of interacting with the Application Service. These methods range from coding-level interfaces to operations that can be performed by administrators and end users.
Installing and Uninstalling
The Admin App installs this service by default. It is recommended to NOT uninstall the application service unless absolutely necessary.
Configuring
None.
Interface Details
The Application Service comes with several interfaces to use.
ApplicationService
The ApplicationService interface is the main class that is used to operate on applications.
This method returns a set of all applications that are installed on the system. Callers can then use the Application handle to get the name and any underlying features and bundles that this application contains.
Returns the application that has the given name.
Starts an application, including any defined dependencies in the application.
Stops an application. External transitive dependencies are not stopped, as they may be needed by other applications.
Adds a new application to the application list. *NOTE: This does NOT start the application.*
Removes an application that has the given URI.
This method takes in an application and returns a boolean value that says if the application is started or not. This method is generally called after retrieving a list of applications in the first method.
This method, unlike isApplicationStarted, returns a full status of an application. This status contains detailed information about the health of the application and is described in the ApplicationStatus interface section.
Creates a hierarchy tree of application nodes that show the relationship between applications.
Determine which application contains a certain feature.
Application
Name of the application. Should be unique among applications.
Retrieves all of the features that this application contains, regardless of whether they are required.
Retrieves all of the bundles that are defined by the features and included in this application.
ApplicationStatus
Sends back the application that is associated with this status.
Returns the application's state as defined by ApplicationState.
Returns a set of Features that were required for this application but did not start correctly.
Returns a set of Bundles that were required for this application but did not start correctly.
ApplicationNode
Returns the application this node is referencing.
Returns the status for the application this node is referencing.
Returns the parent of the application.
Returns the children of this application, that is, the applications that have a requirement on this application.
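To make the parent/child semantics concrete, here is a small illustrative model (plain Python, not the actual Java interfaces) of an application hierarchy in which the children of a node are the applications that require it:

```python
# Illustrative model (not DDF's Java API): an application node whose children
# are the applications that declare a requirement on it, mirroring the
# getParent()/getChildren() semantics described above.

class AppNode:
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

platform = AppNode("platform-app")
catalog = platform.add_child(AppNode("catalog-app"))  # catalog requires platform
spatial = catalog.add_child(AppNode("spatial-app"))   # spatial requires catalog

print([c.name for c in platform.children])  # ['catalog-app']
print(spatial.parent.name)                  # catalog-app
```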
Implementation Details
|
A client of this service is provided as an extension to the administrative console. Information about how to use it is available on the Application Commands page. |
Imported Services
| Registered Interface | Availability | Multiple | Notes |
|---|---|---|---|
org.apache.karaf.features.FeaturesService |
required |
false |
Provided by Karaf Framework |
org.apache.karaf.bundle.core.BundleStateService |
required |
true |
Installed as part of Platform Status feature. |
Exported Services
| Registered Interface | Implementation Class | Notes |
|---|---|---|
org.codice.ddf.admin.application.service.ApplicationService |
org.codice.ddf.admin.application.service.impl.ApplicationServiceImpl |
Overview
The DDF Catalog provides a framework for storing, searching, processing, and transforming information. Clients typically perform query, create, read, update, and delete (QCRUD) operations against the Catalog. At the core of the Catalog functionality is the Catalog Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.
Design
The Catalog is composed of several components and an API that connects them together. The Catalog API is central to DDF’s architectural qualities of extensibility and flexibility. The Catalog API consists of Java interfaces that define Catalog functionality and specify interactions between components. These interfaces provide the ability for components to interact without a dependency on a particular underlying implementation, thus allowing the possibility of alternate implementations that can maintain interoperability and share developed components. As such, new capabilities can be developed independently, in a modular fashion, using the Catalog API interfaces and reused by other DDF installations.
Ensuring Compatibility
The Catalog API will evolve, but great care is taken to retain backwards compatibility with developed components. Compatibility is reflected in version numbers. For more information, see the Software Versioning section in the Integrator’s Guide Appendix.
This guide supports integration of the Catalog Application.
Integrating Endpoints
Endpoints act as a proxy between the client and the Catalog Framework.
Endpoints expose the Catalog Framework to clients using protocols and formats that they understand.
Endpoint interface formats/protocols can include a variety of formats, including (but not limited to):
-
SOAP Web services
-
RESTful services
-
JMS
-
JSON
-
OpenSearch
The endpoint may transform a client request into a compatible Catalog format and then transform the response into a compatible client format. Endpoints may use Transformers to perform these transformations. This allows an endpoint to interact with Source(s) that have different interfaces. For example, an OpenSearch Endpoint can send a query to the Catalog Framework, which could then query a federated source that has no OpenSearch interface.
Endpoints are meant to be the only client-accessible components in the Catalog.
Existing Endpoints
The following endpoints are provided with the default Catalog out of the box:
DDF Catalog RESTful CRUD Endpoint
The Catalog REST Endpoint allows clients to perform CRUD operations on the Catalog using REST, a simple architectural style that performs communication over HTTP. The URL exposing the REST functionality is http://<HOST>:<PORT>/services/catalog, where HOST is the IP address or host name of the machine on which the distribution is installed and PORT is the port number on which the distribution is listening.
Installing and Uninstalling
The RESTful CRUD Endpoint can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
The RESTful CRUD Endpoint has no configurable properties. It can only be installed or uninstalled.
Using the REST CRUD Endpoint
The RESTful CRUD Endpoint provides the capability to query, create, update, and delete metacards in the catalog provider as follows:
| Operation | HTTP Request | Details | Example URL |
|---|---|---|---|
| create | HTTP POST | HTTP request body contains the input to be ingested. See InputTransformers for more information. | |
| update | HTTP PUT | The ID of the Metacard to be updated is appended to the end of the URL. | |
| delete | HTTP DELETE | The ID of the Metacard to be deleted is appended to the end of the URL. | |
| read | HTTP GET | The ID of the Metacard to be retrieved is appended to the end of the URL. | |
| federated read | HTTP GET | The SOURCE ID of a federated source is appended in the URL before the ID of the Metacard to be retrieved. | |
| sources | HTTP GET | Retrieves information about federated sources, including sourceId, availability, contentTypes, and version. | |
Sources Operation Example
In the example below, there are two sources: the local DDF distribution and a DDF OpenSearch federated source with the id "DDF-OS".
[
{
"id" : "DDF-OS",
"available" : true,
"contentTypes" :
[
],
"version" : "2.0"
},
{
"id" : "ddf.distribution",
"available" : true,
"contentTypes" :
[
],
"version" : "2.5.0-SNAPSHOT"
}
]
Note that for all RESTful CRUD commands only one metacard ID is supported in the URL, i.e., bulk operations are not supported.
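The URL patterns described in the table above can be sketched as a small helper class. This is illustrative only: the host, port, and IDs below are placeholders, and the sources URL is an assumption based on the federated read pattern shown later in this section.

```java
// Builds the REST CRUD Endpoint URLs described in the table above.
// Host, port, source ID, and metacard ID values are illustrative placeholders.
public final class CatalogRestUrls {
    private final String base;  // e.g. http://localhost:8181/services/catalog

    public CatalogRestUrls(String host, int port) {
        this.base = "http://" + host + ":" + port + "/services/catalog";
    }

    public String create() { return base; }                                      // HTTP POST
    public String read(String metacardId) { return base + "/" + metacardId; }    // HTTP GET
    public String update(String metacardId) { return base + "/" + metacardId; }  // HTTP PUT
    public String delete(String metacardId) { return base + "/" + metacardId; }  // HTTP DELETE

    // The source ID precedes the metacard ID in the path for a federated read.
    public String federatedRead(String sourceId, String metacardId) {            // HTTP GET
        return base + "/sources/" + sourceId + "/" + metacardId;
    }

    // Assumed from the federated read pattern; verify against your installation.
    public String sources() { return base + "/sources"; }                        // HTTP GET
}
```

Only one metacard ID is supported per URL, so a client issuing bulk changes must loop over these single-metacard URLs.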
Interacting with the REST CRUD Endpoint
Any web browser can be used to perform a REST read. Various other tools and libraries can be used to perform the other HTTP operations on the REST endpoint (e.g., soapUI, cURL, etc.)
Metacard Transforms with the REST CRUD Endpoint
The read operation can be used to retrieve metadata in different formats.
-
Install the appropriate feature for the desired transformer. If the desired transformer is already installed, such as those that come out of the box (xml, html, etc.), skip this step.
-
Make a read request to the REST URL specifying the catalog id.
-
Add a transform query parameter to the end of the URL specifying the shortname of the transformer to be used (e.g.,
transform=kml). Example:
http://<DISTRIBUTION_HOST>:<DISTRIBUTION_PORT>/services/catalog/<metacardId>?transform=<TRANSFORMER_ID>
|
Transforms also work on read operations for metacards in federated sources. http://<DISTRIBUTION_HOST>:<DISTRIBUTION_PORT>/services/catalog/sources/<sourceId>/<metacardId>?transform=<TRANSFORMER_ID> |
Metacard Transforms Available in DDF
See the Included Metacard Transformers section for the list of Metacard Transformers available in DDF.
|
MetacardTransformers can be added to the system at any time. This endpoint can make use of any registered MetacardTransformers. |
InputTransformers
This REST Endpoint uses InputTransformers to create metacards from the body of an HTTP POST (create) operation. The REST Endpoint dynamically finds InputTransformers that support the Content-Type stated in the header of the HTTP POST. InputTransformers register as Services with a list of Content-Type mime-types. The REST Endpoint receives a list of InputTransformers that match the Content-Type and calls them one-by-one until a transformer succeeds and creates a Metacard. For instance, if GeoJSON was in the body of the HTTP POST, then the HTTP header would need to include application/json in order to match the mime-type the GeoJSON Input Transformer supports.
|
InputTransformers can be added to the system at any time. |
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
| | required | false |
| | required | false |
| | required | false |
Exported Services
| Registered Interface | Service Property | Value |
|---|---|---|
| ddf.action.ActionProvider | id | catalog.data.metacard.view |
| ddf.catalog.util.DdfConfigurationWatcher | | |
Known Issues
None.
OpenSearch Endpoint
The OpenSearch Endpoint provides a CDR REST Search v3.0 and CDR REST Brokered Search 1.1 compliant DDF endpoint that a client accesses to send query parameters and receive search results.
This endpoint uses the input query parameters to create an OpenSearch query. The client does not need to specify all of the query parameters, only the query parameters of interest.
This endpoint is a JAX-RS RESTful service and is compliant with the CDR IPT BrokeredSearch, CDR IPT OpenSearch, and OpenSearch specifications. For more information on its parameters view the OpenSearch Description Document section below.
Installing and Uninstalling
The OpenSearch Endpoint can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
The OpenSearch Endpoint has no configurable properties. It can only be installed or uninstalled.
Using the OpenSearch Endpoint
Once installed, the OpenSearch endpoint is accessible from http://<DDF_HOST>:<DDF_PORT>/services/catalog/query.
Using the endpoint
From Code:
The OpenSearch specification defines a file format to describe an OpenSearch endpoint. This XML-based file is used to programmatically retrieve a site's endpoint, as well as the different parameter options a site holds. The parameters are defined in the OpenSearch and CDR IPT specifications.
From a Web Browser:
Many modern web browsers currently act as OpenSearch clients. The request call is an HTTP GET with the query options being parameters that are passed.
Example of an OpenSearch request:
http://<ddf_host>:8181/services/catalog/query?q=Predator
This request performs a full-text search for the phrase 'Predator' on the DDF providers and provides the results as Atom-formatted XML for the web browser to render.
Parameter List
Main OpenSearch Standard
| OS Element | HTTP Parameter | Possible Values | Comments |
|---|---|---|---|
| searchTerms | q | URL-encoded string | Complex contextual search string. |
| count | count | integer >= 0 | Maximum number of results to retrieve. default: 10 |
| startIndex | start | integer >= 1 | Index of the first result to return (one-based). default: 1 |
| format | format | A transformer shortname as a string; possible values include (when available) atom, html, and kml. See Included Query Response Transformers for more possible values. | default: atom |
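As a sketch, the parameters from the table above can be assembled into a query URL. The builder class below is illustrative, not part of DDF; only the parameter names (q, count, start, format) and the query path come from this documentation.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Assembles an OpenSearch query URL from the main OpenSearch parameters above.
// Host and port are supplied by the caller; parameter names come from the table.
public final class OpenSearchQuery {
    private final Map<String, String> params = new LinkedHashMap<>();

    public OpenSearchQuery q(String searchTerms) { params.put("q", searchTerms); return this; }
    public OpenSearchQuery count(int count) { params.put("count", Integer.toString(count)); return this; }
    public OpenSearchQuery start(int startIndex) { params.put("start", Integer.toString(startIndex)); return this; }
    public OpenSearchQuery format(String transformerShortname) { params.put("format", transformerShortname); return this; }

    public String toUrl(String host, int port) {
        // URL-encode each value, as required for the q parameter.
        String query = params.entrySet().stream()
                .map(e -> e.getKey() + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
        return "http://" + host + ":" + port + "/services/catalog/query?" + query;
    }
}
```

For example, `new OpenSearchQuery().q("Predator").count(20).toUrl("localhost", 8181)` produces a full-text search URL like the one shown earlier in this section.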
Temporal Extension
| OS Element | HTTP Parameter | Possible Values | Comments |
|---|---|---|---|
| start | dtstart | RFC 3339-defined value | yyyy-MM-dd'T'HH:mm:ss.SSSZZ |
| end | dtend | RFC 3339-defined value | yyyy-MM-dd'T'HH:mm:ss.SSSZZ |
|
The start and end temporal criteria must be of the format specified above. Other formats are currently not supported. Example: 2011-01-01T12:00:00.111-04:00. The start and end temporal elements are based on modified timestamps for a metacard. |
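As a sketch of producing values in the required layout, java.time can format conforming dtstart/dtend strings. Note java.time writes the zone offset with the pattern letters XXX where the Joda-style pattern above writes ZZ; the helper class below is illustrative, not part of DDF.

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

// Formats dtstart/dtend values in the required layout, e.g. 2011-01-01T12:00:00.111-04:00.
// XXX is java.time's equivalent of the Joda-style ZZ offset (e.g. -04:00).
public final class TemporalParams {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");

    public static String format(OffsetDateTime t) {
        return FMT.format(t);
    }
}
```

The formatted value can then be passed directly as the dtstart or dtend query parameter.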
Geospatial Extension
These geospatial query parameters are used to create a geospatial INTERSECTS query, where INTERSECTS = geometries that are not DISJOINT of the given geospatial parameter.
| OS Element | HTTP Parameter | Possible Values | Comments |
|---|---|---|---|
| lat | lat | EPSG:4326 decimal degrees | Expects a latitude and a radius to be specified. |
| lon | lon | EPSG:4326 decimal degrees | Expects a longitude and a radius to be specified. |
| radius | radius | meters along the Earth's surface, > 0 | Used in conjunction with the lat and lon query parameters. |
| polygon | polygon | clockwise lat lon pairs ending at the first one | Example: -80, -170, 0, -170, 80, -170, 80, 170, 0, 170, -80, 170, -80, -170. Deprecated by the OpenSearch Geo Specification; use geometry instead. |
| box | bbox | 4 comma-separated EPSG:4326 decimal degrees | west, south, east, north |
| geometry | geometry | WKT geometries: POINT, POLYGON, MULTIPOINT, MULTIPOLYGON | Examples: POINT(10 20), where 10 is the longitude and 20 the latitude. POLYGON((30 10, 10 20, 20 40, 40 40, 30 10)), where 30 is the longitude and 10 the latitude of the first point. Repeat the starting point as the last point to close the polygon. |
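A minimal sketch of formatting these geospatial parameters follows; the helper class is illustrative, not part of DDF. The key detail from the table is that bbox is ordered west, south, east, north, while WKT puts longitude before latitude.

```java
// Formats the geospatial query parameters described in the table above.
// All coordinate values are EPSG:4326 decimal degrees.
public final class GeoParams {

    // bbox = west,south,east,north
    public static String bbox(double west, double south, double east, double north) {
        return west + "," + south + "," + east + "," + north;
    }

    // geometry = WKT point; note longitude comes first, then latitude
    public static String point(double lon, double lat) {
        return "POINT(" + lon + " " + lat + ")";
    }

    // lat/lon/radius trio: radius is meters along the Earth's surface, > 0
    public static String pointRadius(double lat, double lon, double radiusMeters) {
        return "lat=" + lat + "&lon=" + lon + "&radius=" + radiusMeters;
    }
}
```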
Extensions
| OS Element | HTTP Parameter | Possible Values | Comments |
|---|---|---|---|
| sort | sort | sbfield: 'date' or 'relevance'; sborder: 'asc' or 'desc' | sort=<sbfield>:<sborder> default: relevance:desc. Sorting by date sorts on the effective date. |
| maxResults | mr | integer >= 0 | Maximum number of results to return. If count is also specified, the count value takes precedence over maxResults. |
| maxTimeout | mt | integer > 0 | Maximum timeout (in milliseconds) for the query to respond. default: 300000 (5 minutes) |
Federated Search
| OS Element | HTTP Parameter | Possible Values | Comments |
|---|---|---|---|
| routeTo | src | varies depending on the names of the sites in the federation | Comma-delimited list of site names to query. Specify src=local to query the local site. If src is not provided, the default behavior is to execute an enterprise search across the entire federation. |
DDF Extensions
| OS Element | HTTP Parameter | Possible Values | Comments |
|---|---|---|---|
| dateOffset | dtoffset | integer > 0 | Specifies an offset, backwards from the current time, to search on the modified time field for entries. Defined in milliseconds. |
| type | type | nitf | Specifies the type of data to search for. |
| version | version | 20,30 | Comma-delimited list of version values to search for. |
| selector | selector | //namespace:example,//example | Comma-delimited list of XPath string selectors that narrow the search. |
Supported Complex Contextual Query Format
The OpenSearch Endpoint supports the following operators: AND, OR, and NOT. These operators are case sensitive. Implicit ANDs are also supported.
Using parenthesis to change the order of operations is supported. Using quotes to group keywords into literal expressions is supported.
The following EBNF describes the grammar used for the contextual query format.
keyword query expression = optional whitespace, term, {boolean operator, term}, optional
whitespace;
boolean operator = or | not | and;
and = (optional whitespace, "AND", optional whitespace) | mandatory whitespace;
or = (optional whitespace, "OR", optional whitespace);
not = (optional whitespace, "NOT", optional whitespace);
term = group | phrase | keyword;
phrase = optional whitespace, '"', optional whitespace, keyword, { optional whitespace,
keyword}, optional whitespace, '"';
group = optional whitespace, '(', optional whitespace, keyword query expression,
optional whitespace, ')';
optional whitespace = {' '};
mandatory whitespace = ' ', optional whitespace;
valid character = ? any printable character ? - ('"' | '(' | ')' | " ");
keyword = valid character, {valid character};
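As an illustration of the grammar above, the following is a minimal recursive-descent validator for it. This is a sketch that follows the EBNF directly, not DDF's actual query parser; note the grammar is ambiguous in places (for example, a trailing AND can also parse as a keyword), and the sketch does not attempt to resolve such ambiguities.

```java
// Minimal recursive-descent validator for the keyword query grammar above.
// Illustrative only; not DDF's implementation.
public final class KeywordQueryValidator {
    private final String in;
    private int pos;

    private KeywordQueryValidator(String s) { this.in = s; }

    public static boolean isValid(String query) {
        KeywordQueryValidator v = new KeywordQueryValidator(query);
        v.ws();
        boolean ok = v.expression();
        v.ws();
        return ok && v.pos == v.in.length();
    }

    // keyword query expression = term, { boolean operator, term }
    private boolean expression() {
        if (!term()) return false;
        while (true) {
            int save = pos;
            if (!operator() || !term()) { pos = save; return true; }
        }
    }

    // boolean operator = or | not | and; "and" may be bare whitespace (implicit AND)
    private boolean operator() {
        int save = pos;
        boolean sawWs = ws();
        for (String op : new String[] {"AND", "OR", "NOT"}) {
            if (in.startsWith(op, pos)) {
                int after = pos + op.length();
                if (after < in.length()) {
                    char c = in.charAt(after);
                    // the operator must not run into a longer keyword such as "ANDROID"
                    if (c == ' ' || c == '(' || c == '"') { pos = after; ws(); return true; }
                }
            }
        }
        if (sawWs && pos < in.length()) return true;  // implicit AND
        pos = save;
        return false;
    }

    // term = group | phrase | keyword
    private boolean term() { return group() || phrase() || keyword(); }

    private boolean group() {
        int save = pos;
        if (pos < in.length() && in.charAt(pos) == '(') {
            pos++; ws();
            if (expression()) {
                ws();
                if (pos < in.length() && in.charAt(pos) == ')') { pos++; return true; }
            }
        }
        pos = save;
        return false;
    }

    // phrase = '"', keyword, { whitespace, keyword }, '"'
    private boolean phrase() {
        int save = pos;
        if (pos < in.length() && in.charAt(pos) == '"') {
            pos++; ws();
            if (keyword()) {
                while (true) { int s = pos; ws(); if (!keyword()) { pos = s; break; } }
                ws();
                if (pos < in.length() && in.charAt(pos) == '"') { pos++; return true; }
            }
        }
        pos = save;
        return false;
    }

    // keyword = one or more printable characters excluding '"', '(', ')', and space
    private boolean keyword() {
        int start = pos;
        while (pos < in.length() && isValidChar(in.charAt(pos))) pos++;
        return pos > start;
    }

    private static boolean isValidChar(char c) {
        return c > ' ' && c != '"' && c != '(' && c != ')';
    }

    private boolean ws() {
        int start = pos;
        while (pos < in.length() && in.charAt(pos) == ' ') pos++;
        return pos > start;
    }
}
```

For example, `mars rover` (implicit AND), `mars AND rover`, and `(mars OR venus) AND "red planet"` all validate, while unbalanced parentheses or an unterminated phrase do not.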
OpenSearch Description Document
The OpenSearch Description Document is an XML file found inside the OpenSearch Endpoint bundle, named ddf-os.xml.
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
| | required | false |
| | required | false |
Exported Services
| Registered Interface | Service Property | Value |
|---|---|---|
Example Output
The default output for OpenSearch is Atom. Detailed documentation on Atom output, including query result mapping and example output, can be found on the Atom Query Response Transformer page.
Known Issues
None
Developing a New Endpoint
Complete the following procedure to create an endpoint.
-
Create a Java class that implements the endpoint’s business logic. Example: Creating a web service that external clients can invoke.
-
Add the endpoint’s business logic, invoking CatalogFramework calls as needed.
-
Import the DDF packages to the bundle’s manifest for run-time (in addition to any other required packages):
Import-Package: ddf.catalog, ddf.catalog.*
-
Retrieve an instance of CatalogFramework from the OSGi registry. (Refer to the Working with OSGi - Service Registry section for examples.)
-
Deploy the packaged service to DDF. (Refer to the Working with OSGi - Bundles section.)
|
It is recommended to use the maven bundle plugin to create the Endpoint bundle’s manifest as opposed to directly editing the manifest file. |
|
No implementation of an interface is required. |
Common Endpoint Business Logic
| Methods | Use |
|---|---|
| Ingest | Add, modify, and remove metadata using the ingest-related CatalogFramework methods: |
| Query | Request metadata using the |
| Source | Get available |
| Resource | Retrieve products referenced in Metacards from Sources. |
| Transform | Convert common Catalog Framework data types to and from other data formats. |
DDF Data Migration
Data migration is the process of moving metadata from one catalog provider to another. It is also the process of translating metadata from one format to another. Data migration is necessary when a user decides to use metadata from one catalog provider in another catalog provider. The following steps define the procedure for transferring metadata from one catalog provider to another catalog provider. In addition, the procedures define the steps for converting metadata to different data formats.
Set Up
Set up DDF as instructed in Starting DDF section.
Move Metadata from One Catalog Provider to Another
Export Metadata Out of Catalog Provider
-
Configure a desired catalog provider.
-
From the command line of the DDF console, use the dump command to export all metadata from the catalog provider into serialized data files. The following example shows a command for running on Linux and a command for running on Windows.
dump "/myDirectory/exportFolder" or dump "C:/myDirectory/exportFolder"
Ingest Exported Metadata into Catalog Provider
-
Configure a different catalog provider.
-
From the command line of the DDF console, use the ingest command to import exported metadata from serialized data files into the catalog provider. The following example shows a command for running on Linux and a command for running on Windows.
ingest -p "/myDirectory/exportFolder" or ingest -p "C:/myDirectory/exportFolder"
Translate Metadata from One Format to Another
Metadata can be converted from one data format to another. Only the data format changes; the content of the metadata does not, as long as option -p is used with the ingest command. Metadata is converted by ingesting a data file into a catalog provider in one format and dumping it out into a file in another format. Additional information on the ingest and dump commands can be found at Catalog Commands.
Integrating Catalog Framework
Catalog Framework
The Catalog Framework wires all Catalog components together. It is responsible for routing Catalog requests and responses to the appropriate target. Endpoints send Catalog requests to the Catalog Framework. The Catalog Framework then invokes Catalog Plugins, Transformers, and Resource Components as needed before sending requests to the intended destination, such as one or more Sources.
Example Catalog Frameworks
The Catalog comes with the following Catalog Frameworks out of the box:
-
Catalog Framework
-
Catalog Fanout Framework
Catalog Framework Sequence Diagrams
Because the Catalog Framework plays a central role to Catalog functionality, it interacts with many different Catalog components. To illustrate these relationships, high level sequence diagrams with notional class names are provided below. These examples are for illustrative purposes only and do not necessarily represent every step in each procedure.
Ingest
The Ingest Service Endpoint, the Catalog Framework, and the Catalog Provider are key components of the Reference Implementation. The Endpoint bundle implements a Web service that allows clients to create, update, and delete metacards. The Endpoint calls the CatalogFramework to execute the operations of its specification. The CatalogFramework routes the request through optional PreIngest and PostIngest Catalog Plugins, which may modify the ingest request/response before/after the Catalog Provider executes the ingest request and provides the response. Note that a CatalogProvider must be present for any ingest requests to be successfully processed, otherwise a fault is returned.
This process is similar for updating catalog entries, with update requests calling the update(UpdateRequest) methods on the Endpoint, CatalogFramework, and Catalog Provider. Similarly, for deletion of catalog entries, the delete requests call the delete(DeleteRequest) methods on the Endpoint, CatalogFramework, and Catalog Provider.
Error Handling
Any ingest attempts that fail inside the Catalog Framework (whether the failure comes from the Catalog Framework itself, pre-ingest plugin failures, or issues with the Catalog Provider) will be logged to a separate log file for ease of error handling. The file is located at data/log/ingest_error.log and will log the Metacards that fail, their ID and Title name, and the stack trace associated with their failure. By default, successful ingest attempts are not logged. However, that functionality can be achieved by setting the log level of the ingestLogger to DEBUG (note that enabling DEBUG can cause a non-trivial performance hit).
|
To turn off logging failed ingest attempts into a separate file, execute the following via the command line console: log:set ERROR ingestLogger |
Query
The Query Service Endpoint, the Catalog Framework, and the CatalogProvider are key components for processing a query request as well. The Endpoint bundle contains a Web service that exposes the interface to query for Metacards. The Endpoint calls the CatalogFramework to execute the operations of its specification. The CatalogFramework relies on the CatalogProvider to execute the actual query. Optional PreQuery and PostQuery Catalog Plugins may be invoked by the CatalogFramework to modify the query request/response prior to the Catalog Provider processing the query request and providing the query response. If a CatalogProvider is not configured and no other remote Sources are configured, a fault will be returned. It is possible to have only remote Sources configured and no local CatalogProvider configured and be able to execute queries to specific remote Sources by specifying the site name(s) in the query request.
Product Retrieval
The Query Service Endpoint, the Catalog Framework, and the CatalogProvider are key components for processing a retrieve product request. The Endpoint bundle contains a Web service that exposes the interface to retrieve products, also referred to as Resources. The Endpoint calls the CatalogFramework to execute the operations of its specification. The CatalogFramework relies on the Sources to execute the actual product retrieval. Optional PreResource and PostResource Catalog Plugins may be invoked by the CatalogFramework to modify the product retrieval request/response prior to the Catalog Provider processing the request and providing the response. It is possible to retrieve products from specific remote Sources by specifying the site name(s) in the request.
Product Caching
The Catalog Framework optionally provides caching of products, so future requests to retrieve the same product will be serviced much quicker. If caching is enabled, each time a retrieve product request is received, the Catalog Framework will look in its cache (default location <INSTALL_DIR>/data/product-cache) to see if the product has been cached locally. If it has, the product is retrieved from the local site and returned to the client, providing a much quicker turnaround because remote product retrieval and network traffic was avoided. If the requested product is not in the cache, the product is retrieved from the Source (local or remote) and cached locally while returning the product to the client. The caching to a local file of the product and the streaming of the product to the client are done simultaneously so that the client does not have to wait for the caching to complete before receiving the product. If errors are detected during the caching, caching of the product will be abandoned, and the product will be returned to the client.
The Catalog Framework attempts to detect any network problems during the product retrieval, e.g., long pauses where no bytes are read implying a network connection was dropped. (The amount of time that a "long pause" is defined as is configurable, with the default value being five seconds.) The Catalog Framework will attempt to retrieve the product up to a configurable number of times (default = three), waiting for a configurable amount of time (default = 10 seconds) between each attempt, trying to successfully retrieve the product. If the Catalog Framework is unable to retrieve the product, an error message is returned to the client.
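The retry behavior described above can be sketched as a generic loop. The task type and error handling below are illustrative, not DDF's implementation; the defaults named in the text (three attempts, 10-second wait) would be supplied as arguments.

```java
import java.util.function.Supplier;

// Generic retry loop mirroring the product-retrieval behavior described above:
// up to maxAttempts tries, waiting retryDelayMs between failed attempts.
// The caller supplies the retrieval task; DDF's actual implementation differs.
public final class RetryingRetrieval {

    public static <T> T retrieve(Supplier<T> task, int maxAttempts, long retryDelayMs) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.get();
            } catch (RuntimeException e) {
                last = e;  // e.g. a dropped network connection detected as a long pause
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(retryDelayMs);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
        // All attempts failed: surface an error to the client.
        if (last == null) throw new IllegalStateException("no retrieval attempts made");
        throw last;
    }
}
```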
If the admin has enabled the Always Cache When Canceled option, caching of the product will occur even if the client cancels the product retrieval so that future requests will be serviced quickly. Otherwise, caching is canceled if the user cancels the product download.
Product Download Status
As part of the caching of products, the Catalog Framework also posts events to the OSGi notification framework. Information includes when the product download started, whether the download is retrying or failed (after the number of retrieval attempts configured for product caching has been exhausted), and when the download completes. These events are retrieved by the Search UI and presented to the user who initiated the download.
DDF Catalog Schematron
The Schematron Validation Plugin (plugin-schematron-validation bundle) provides a pre-ingest interceptor that validates the incoming request against a Schematron ruleset (or rule sets). If the request has warnings or errors based on the Schematron validation, the request is marked as invalid and a SOAP fault is returned with details on the exact reason why the request was invalid. This bundle has the following characteristics:
-
It provides the Schematron engine, meaning it provides the infrastructure to load, parse, and apply Schematron rule sets.
-
It does not contain any Schematron ruleset(s) - those must be installed (as features) separately.
The Schematron validation bundle works with Schematron rule set bundles to obtain the rules for validation. The Schematron validation bundle and the Schematron rule set bundle are uninstalled by default. More information about Schematron in general can be found at http://www.schematron.com.
Understanding Schematron
Schematron is a language for making assertions about the presence or absence of patterns in XML documents. It is not a replacement for XML Schema (XSD) validation. Rather, it is used in conjunction with many grammar-based structure-validation languages, such as XSD.
Schematron is an ISO standard: ISO/IEC 19757-3:2006 Information technology — Document Schema Definition Language (DSDL) — Part 3: Rule-based validation — Schematron
Schematron assertions are based on two simple actions:
-
First, find context nodes in the document (typically an element) based on XPath criteria.
-
Then, check to see if some other XPath expressions are true, for each of the nodes returned in the first step.
Schematron assertions (or rules) are defined in a .sch file by convention, which is an XML file conforming to Schematron’s rules for defining assertions. This file is referred to as a "Schematron ruleset." These rules are contained in one .sch file or a hierarchy of .sch files. However, there is ultimately one .sch file that includes or uses all of the other .sch files. This one .sch file is the "ruleset" used by the DDF Schematron Validation Service.
Schematron also includes SVRL (Schematron Validation Report Language) report generation, which is in XML format. This report includes the results of all of the Schematron rulesets' assertions, classifying them as warnings or errors (based on the ruleset).
DDF implements Schematron as a Pre-Ingest Plugin, running the Schematron ruleset(s) against each catalog entry in each create and update ingest request that DDF receives. The DDF Schematron Validation Pre-Ingest Plugin consists of two components: the Schematron "engine" and the client ruleset bundle(s). Each are described below.
Schematron Validation Plugin
The Schematron Validation Service is in a single OSGi bundle named plugin-schematron-validation. This bundle includes all of the code to implement:
-
Loading and pre-compilation of the client ruleset bundle
-
Executing the ruleset against ingest requests
-
Generating the SVRL report. From this report, the Schematron Validation Service determines if errors and/or warnings were detected during validation. If errors or warnings exist, validation fails and the ingest request is rejected. A SOAP fault is then returned to the client, including details on why the request is invalid.
The client’s ruleset bundle determines what rules generate warnings and what rules generate errors. The Schematron Validation Service provides a configuration option (accessible via the Web Console’s Configuration page) to suppress warnings. When this option is set, if only warnings are detected during Schematron validation, then the request is considered valid. By default, this suppress warnings option is unset (hence warnings result in invalid requests by default).
Validation is executed per catalog entry in the ingest request. Note that if multiple catalog entries are in the request, Schematron validation stops once a catalog entry is determined to be invalid. For example, if ten catalog entries are in a single create ingest request and entry #4 is invalid, entries 5 through 10 are not validated at all. Schematron returns an invalid status after entry #4 is validated.
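The short-circuit behavior just described can be sketched as follows; the entry type and the ruleset check are placeholders, not the actual Schematron Validation Service API.

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the short-circuit validation described above: validation stops at
// the first invalid catalog entry, so later entries are never validated.
public final class ShortCircuitValidator {

    /** Returns the zero-based index of the first invalid entry, or -1 if all entries pass. */
    public static <T> int firstInvalid(List<T> entries, Predicate<T> passesRuleset) {
        for (int i = 0; i < entries.size(); i++) {
            if (!passesRuleset.test(entries.get(i))) {
                return i;  // remaining entries are not validated
            }
        }
        return -1;
    }
}
```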
|
If only the Schematron Validation Service is installed, no Schematron validation occurs. This is because the Schematron Validation Service has no ruleset to validate the request against; it only provides the framework for Schematron rulesets to be applied to ingest requests. At least one client ruleset bundle must also be installed. |
Schematron Client "Ruleset" Bundle(s)
A client must deploy at least one Schematron ruleset bundle before Schematron validation occurs.
The Schematron ruleset bundle consists of three required items:
-
The
.sch ruleset file defining the applied Schematron rules
-
A bundle wiring specification file (e.g., Blueprint, Spring DM, Declarative Services, etc.) specifying the
.sch file used and associating the ruleset with the Schematron Validation Service
-
An OSGi metatype XML file that specifies the configurable options for the Schematron Validation Service (namely the suppress warnings option)
The diagram below illustrates how these Schematron components interact:
Installing and Uninstalling
The Schematron Validation Plugin can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
There are no configuration options for this application.
Overview
The DDF Content application provides a framework for storing, reading, processing, transforming, and cataloging data. This guide provides instructions for configuring, maintaining, and operating components of the Distributed Data Framework.
DDF Content Framework
The DDF Content Framework is a framework for storing, reading, processing, and transforming content information. Content information consists of files that the client wants parsed into a metacard, which is subsequently used to create a catalog entry in the Metadata Catalog, while the content itself is stored in the DDF Content Repository.
The files passed into the DDF Content Framework can be of any type, e.g., NITF, PDF, Microsoft Word, etc., as long as their mime type can be resolved, an Input Transformer exists to parse their content into a metacard, and the generated metacard satisfies the constraints of the catalog provider into which the generated metacard will be inserted. For example, if the Tika Input Transformer is installed, Microsoft Office documents and PDF files can be transformed into metacards. If the Solr catalog provider is being used, the generated metacard can be successfully inserted.
Clients typically perform create, read, update, and delete (CRUD) operations against the content repository. At the core of the Content functionality is the Content Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.
The DDF Content Framework has several components and an API that connects them together. The Content API consists of the Java interfaces that define DDF functionality. These interfaces provide the ability for components to interact without a dependency on a particular underlying implementation; therefore, allowing the possibility of alternate implementations that can maintain interoperability and share developed components. As such, new capabilities can be developed independently in a modular fashion, using the Content API interfaces, and reused by other DDF installations.
The DDF Content API will evolve with DDF itself, but great care is taken to retain backwards compatibility with developed components. Compatibility is reflected in version numbers. For more information, see the Software Versioning section in the Appendix.
Content Framework Architecture
Architecture
Design
The DDF Content Framework design consists of several major components, namely endpoints, input transformers, the core Content Framework, storage providers, and Content plugins.
The endpoints provide external clients access to the Content Framework. Input transformers convert incoming content into a metacard. The core Content Framework routes requests and responses through the system. The storage providers persist the content to specific types of storage, e.g., a file system, relational database, or XML database. The Content plugins provide pluggable functionality that is executed after the content has been stored, updated, or deleted but before the response is returned to the client.
DDF Content API Packaging
The Content API consists of the Java interfaces that define the methods that this API supports. The following illustrations show these interfaces and how they are packaged.
DDF Content API Reference Implementation
The Content API Reference Implementation consists of the Java classes that implement the methods defined in the Content API. The following illustrations show these classes and how they are packaged.
Content Component Types
Content Data Components
Content Item
Content Item is the domain object, which is populated by the Content Endpoint from the client request, that represents the information about the content to be stored in the Storage Provider. A Content Item encapsulates the content’s globally unique ID, mime type, and input stream (i.e., the actual content).
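A minimal sketch of the fields a Content Item encapsulates follows. The class name and accessors are illustrative placeholders; the real Content API interfaces differ.

```java
import java.io.InputStream;

// Sketch of the Content Item domain object described above: a globally unique
// ID, a mime type, and an input stream holding the actual content.
public final class SimpleContentItem {
    private final String id;                // globally unique ID
    private final String mimeType;          // e.g. "application/pdf"
    private final InputStream inputStream;  // the actual content bytes

    public SimpleContentItem(String id, String mimeType, InputStream inputStream) {
        this.id = id;
        this.mimeType = mimeType;
        this.inputStream = inputStream;
    }

    public String getId() { return id; }
    public String getMimeType() { return mimeType; }
    public InputStream getInputStream() { return inputStream; }
}
```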
Content Endpoints
Content endpoints act as a proxy between the client and the Content Framework. Endpoints expose the Content Framework to clients.
Endpoint interface formats/protocols can include a variety of formats, including (but not limited to):
- SOAP Web services
- RESTful services
- JMS
- RMI
- JSON
- OpenSearch
Content endpoints provide the capability to create, read, update, and delete content in the content repository, as well as create, update, and delete metacards corresponding to the content in the Metadata Catalog.
Endpoints are the only client-accessible components in DDF.
Examples
The following endpoints are provided with the Content Framework out of the box:
- Content REST CRUD Endpoint
Content Framework
The Content Framework wires all Content components together via OSGi and the Content API. It handles all Content operations requested by endpoints, invoking Content Plugins as needed, and for most Operations, sending the request to a Storage Provider for execution.
Examples
The DDF Content comes with the following Content Frameworks out of the box:
- Standard Content Framework
Content Operations
The DDF Content provides the capability to read, create, update, and delete content from the DDF Content Repository.
Each of these operations follow a request/response paradigm. The request is the input to the operation and contains all of the input parameters needed by the Content Framework’s operation to communicate with the Storage Providers and Content Plugins. The response is the output from the execution of the operation that is returned to the client and contains all of the data returned by the Storage Providers and Content Plugins. For each operation, there is an associated request/response pair, e.g., the CreateRequest and CreateResponse pair for the Content Framework’s create operation.
All of the request and response objects are extensible in that they can contain additional key/value properties on each request/response. This allows additional capability to be added without changing the Content API, helping to maintain backwards compatibility. Refer to the Developer’s Guide for details on using this extensibility.
Content Plugins
The Content Framework calls Content plugins to process requests after they have been processed by the Storage Provider. If the request does not specify content storage (only processing), the Content Plugins are called immediately by the Content Framework.
Examples
Types of Content Plugins available out of the box:
- Content Cataloger Plugin
Storage Providers
Storage providers act as a proxy between the Content Framework and the mechanism storing the content, e.g., file system, relational database. Storage providers expose the storage mechanism to the Content Framework.
Storage providers provide the capability to the Content Framework to create, read, update, and delete content in the content repository.
Examples
The following storage providers are provided with the Content Framework out of the box:
- File System Storage Provider
DDF Content Core
The content-core bundle is a collection of default content components that can be used for most situations.
Standard Content Framework
The Standard Content Framework provides the reference implementation of a Content Framework that implements all requirements of the Content API. ContentFrameworkImpl is the implementation of the Standard Content Framework.
Using
The Standard Content Framework is the core class of DDF Content. It provides the methods for read, create, update, and delete (CRUD) operations on the Storage Provider.
Use this framework if:
- access to a storage provider to create, update, and delete content items in the DDF Content Repository is required, or
- the ability to parse content, create a metacard, and then create, update, and delete catalog entries in the Metadata Catalog based on the parsed content is required.
Installing and Uninstalling
The Standard Content Framework is bundled in the content-core feature and is part of the content-core-app. It can be installed and uninstalled using the normal processes described in the Configuration section.
Configuring
There are no configuration properties for this component. This component can only be installed and uninstalled.
Known Issues
None
Content Cataloger Plugin
The Content Cataloger Plugin provides the implementation to parse content, create a Metacard, and create, update, and delete catalog entries in the Metadata Catalog.
The Content Cataloger Plugin is an implementation of the ContentPlugin interface. When installed, it is invoked by the Content Framework after a content item has been processed by the Storage Provider, but before the response is returned to the Content Endpoint.
The Content Cataloger Plugin searches the OSGi service registry for all services registered as inputTransformers that can process the content item’s mime type. If such a service is found, the service is invoked (for create and update operations; delete operations are handled internally by the Content Cataloger Plugin). The inputTransformer service accepts the content item’s InputStream and parses it, creating a Metacard that is returned to the Content Cataloger Plugin. This Metacard is then used in the create and update operations invoked on the Catalog Framework to interface with the Metadata Catalog.
Details on how to develop an Input Transformer with either Java or Apache Camel can be found in the Developing an Input Transformer section of Extending Catalog Transformers.
Using
Use the Content Cataloger Plugin if create/update/delete of catalog entries in the Metadata Catalog based on the content item are desired. These CUD operations on the Metadata Catalog are made possible by parsing the content item to create a metacard and then using this metacard in the CUD operations on the Catalog Framework. The Content Cataloger Plugin is the only component in the DDF Content Framework that has the ability to interface with the Catalog Framework (and hence the Metadata Catalog).
Installing and Uninstalling
The Content Cataloger Plugin is bundled as the content-core-catalogerplugin feature and can be installed and uninstalled using the normal processes described in the Configuration section of the Administrator’s Guide.
Configuring
There are no configurable properties for this component. This component can only be installed and uninstalled.
Known Issues
The Content Cataloger Plugin is only partially transactional. On create operations where the content is being stored in the content repository and parsed to generate a metacard for insertion into the Metadata Catalog, the content storage will be undone (i.e., the recently inserted content is removed from the content repository) if the Metadata Catalog insertion encounters problems. Update and delete operations have no transactional capabilities: once the content is updated or deleted, this cannot be undone. Therefore, the content repository and Metadata Catalog could get out of sync.
Directory Monitor
The Content Directory Monitor allows files placed in a monitored directory to be ingested into the DDF Content Repository and/or the Metadata Catalog (MDC). A monitored directory is a directory configured to be polled by DDF periodically (typically every one second) for any new files added to the directory that should be ingested into the Content Framework.
The typical execution flow of the Directory Monitor is:

- A new file is detected in the monitored directory.
- The file’s contents are passed on to the Content Framework and processed based on whether the monitored directory’s processing directive was:
  - configured to just store the file in the DDF Content Repository,
  - configured to just process the file’s metadata and ingest it into the MDC, or
  - configured to both store the file in the Content Repository and ingest it into the MDC.
- If the response from the Content Framework is successful, indicating the content was stored and/or processed, the file in the monitored directory is either deleted (default behavior) or copied to a sub-directory called `.ingested` (see below for how to configure this behavior). If the response from the Content Framework was unsuccessful or a failure occurred, the file is moved from the monitored directory to a sub-folder named `.errors`, allowing easy identification of the ingested files that had problems.
Multiple monitored directories can be configured, each monitoring different directories.
Using
The Content Directory Monitor provides the capability to easily create content in the DDF Content Repository and metacards in the MDC by simply placing a file in a directory that has been configured to be monitored by DDF. For example, this would be useful for copying files from a hard drive (or directory) in a batch-like operation to the monitored directory and having all of the files processed by the Content Framework.
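The directory conventions described above can be illustrated with a small local simulation. All paths below are hypothetical stand-ins; in a real deployment DDF itself detects the dropped file and moves it after processing:

```shell
#!/bin/sh
# Local simulation of the monitored-directory conventions. The temp
# directory stands in for <DDF_INSTALL_DIR>/inbox; DDF normally
# performs the move into .ingested (or .errors) after processing.
INBOX="$(mktemp -d)"
mkdir -p "$INBOX/.ingested" "$INBOX/.errors"
echo '{"type": "Feature"}' > "$INBOX/sample.json"   # file dropped for ingest
# On success (with "Copy Ingested Files" checked) the file is backed up:
mv "$INBOX/sample.json" "$INBOX/.ingested/"
ls -A "$INBOX/.ingested"
```

Files left in `.errors` after a batch run identify exactly which ingests failed, which is the main operational benefit of enabling the backup option.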
Sample Usage Scenarios
Scenario 1: Monitor single directory for storage and processing, with no file backup

The Content Directory Monitor has the following configuration:

- The relative path of `inbox` for the directory path.
- The Processing Directive is set to Store and Process.
- The Copy Ingested Files option is not checked.

As files are placed in the monitored directory `<DDF_INSTALL_DIR>/inbox`, the files are ingested into the Content Framework:

- The Content Framework generates a GUID for the create request for this ingested file.
- Since the Store and Process directive was configured, the ingested file is passed on to the Content File System Storage Provider, which creates a sub-directory in the Content Repository using the GUID and places the ingested file into this GUID sub-directory using the file name provided in the request.
- The Content Framework then invokes the Catalog Content Plugin, which looks up the Input Transformer associated with the ingested file’s mime type and invokes the Catalog Framework, which inserts the metacard into the MDC. This Input Transformer creates a metacard based on the contents of the ingested file.
- The Content Framework sends back a successful status to the Camel route that was monitoring the directory.
- The Camel route completes and deletes the file from the monitored directory.
Scenario 2: Monitor single directory for storage with file backup

The Content Directory Monitor has the following configuration:

- The absolute path of `/usr/my/home/dir/inbox` for the directory path.
- The Processing Directive is set to Store only.
- The Copy Ingested Files option is checked.

As files are placed in the monitored directory `/usr/my/home/dir/inbox`, the files are ingested into the Content Framework:

- The Content Framework generates a GUID for the create request for this ingested file.
- Since the Store directive was configured, the ingested file is passed on to the Content File System Storage Provider, which creates a sub-directory in the Content Repository using the GUID and places the ingested file into this GUID sub-directory using the file name provided in the request.
- The Content Framework sends back a successful status to the Camel route that was monitoring the directory.
- The Camel route completes and moves the file from the monitored directory to its sub-directory `/usr/my/home/dir/inbox/.ingested`.
Scenario 3: Monitor multiple directories for processing only with file backup - errors encountered on some ingests

Two different Content Directory Monitors have the following configurations:

- The relative paths of `inbox` and `inbox2` for the directory paths.
- The Processing Directive on both directory monitors is set to Process.
- The Copy Ingested Files option is checked for both directory monitors.

As files are placed in the monitored directory `<DDF_INSTALL_DIR>/inbox`, the files are ingested into the Content Framework:

- The Content Framework generates a GUID for the create request for this ingested file.
- Since the Process directive was configured, the ingested file is passed on to the Catalog Content Plugin, which looks up the Input Transformer associated with the ingested file’s mime type, but no Input Transformer is found and an exception is thrown.
- The Content Framework sends back a failure status to the Camel route that was monitoring the directory.
- The Camel route completes and moves the file from the monitored directory to the `.errors` sub-directory.

As files are placed in the monitored directory `<DDF_INSTALL_DIR>/inbox2`, the files are ingested into the Content Framework:

- The Content Framework generates a GUID for the create request for this ingested file.
- The Content Framework then invokes the Catalog Content Plugin, which looks up the Input Transformer associated with the ingested file’s mime type and invokes the Catalog Framework, which inserts the metacard into the MDC. This Input Transformer creates a metacard based on the contents of the ingested file.
- The Content Framework sends back a successful status to the Camel route that was monitoring the directory.
- The Camel route completes and moves the file from the monitored directory to its `.ingested` sub-directory.
Installing and Uninstalling
The Content Directory Monitor is packaged as the content-core-directorymonitor feature and is part of the content-core-app. It is installed by default.
It can be installed and uninstalled using the normal processes described in the Configuration section.
|
Note that the |
Configuring
This component can be configured using the normal processes described in the Configuration section.
The configurable properties for the Content Directory Monitor are accessed from the Content Directory Monitor Configuration in the Web Console.
Configuring Content Directory Monitors
Managed Service Factory PID:
ddf.content.core.directorymonitor.ContentDirectoryMonitor
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Directory Path | monitoredDirectoryPath | String | Specifies the directory to be monitored. Can be a fully-qualified directory or a relative path (which is relative to the DDF installation directory). | N/A | Yes |
| Processing Directive | directive | String | One of three possible values from a drop-down box: Store, Process, or Store and Process. | Store and Process | Yes |
| Copy Files to Backup Directory | copyIngestedFiles | Boolean | Checking this option indicates that a backup of the file placed in the monitored directory should be made upon successful processing of the file. The file is moved into the `.ingested` sub-directory. | False | No |
Known Issues
None
File System Storage Provider
The File System Storage Provider is used to create/update/delete content items as files in the DDF Content Repository. The File System Storage Provider is an implementation of the Storage Provider interface. When installed, it is invoked by the Content Framework to create, update, or delete a file in the DDF Content Repository.
- For `create` operations, the File System Storage Provider (using the `MimeTypeMapper`) examines the mime type of the content item and determines the extension to use for the file to be stored. The File System Storage Provider also auto-generates a Globally Unique ID (GUID) for the content item. This GUID is used as the sub-directory for the content item’s location in the Content Repository, ensuring the files in the Content Repository are more evenly distributed rather than all being stored in one monolithic directory. The content is stored using the file name specified in the create request.

  As an example, if the content item’s mime type was `image/nitf`, then:
  - the file extension would be `.nitf`,
  - a GUID would be auto-generated (an example GUID would be `54947df8-0e9e-4471-a2f9-9af509fb5889`),
  - the file name is specified in the `create` request (example: `myfile.nitf`), and
  - the location in the Content Repository would be determined based on the GUID and the file name specified in the request (example: `54947df80e9e4471a2f99af509fb5889/myfile.nitf`).
- For `read` operations, the File System Storage Provider reads the content file with the GUID specified in the `ReadRequest`.
- For `update` operations, the File System Storage Provider updates the content file with the content item’s new `InputStream` contents. The GUID of the content file to be updated is included in the `UpdateRequest`.
- For `delete` operations, the File System Storage Provider deletes the content file with the GUID specified in the `DeleteRequest`.
|
A sub-directory is created for each entry in the content store, so there will be limitations based on the file system that is used, i.e., the maximum number of sub-directories supported by that file system. |
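The GUID-based layout described above can be sketched as a simple path computation. The GUID and file name are taken from the example in the text; note that the dashes are dropped when the GUID becomes a directory name:

```shell
#!/bin/sh
# Compute the repository-relative location for the example content item.
# The dashes in the GUID are stripped to form the directory name, and
# the file name from the create request is appended.
GUID="54947df8-0e9e-4471-a2f9-9af509fb5889"
FILENAME="myfile.nitf"
DIR="$(printf '%s' "$GUID" | tr -d '-')"
CONTENT_PATH="$DIR/$FILENAME"
echo "$CONTENT_PATH"   # 54947df80e9e4471a2f99af509fb5889/myfile.nitf
```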
Using
Use the File System Storage Provider if creating, reading, updating, and/or deleting contents in a file system is desired.
Installing and Uninstalling
The File System Storage Provider is packaged as the content-core-filesystemstorageprovider feature and can be installed and uninstalled using the normal processes described in the Configuration section. This feature is installed by default.
Configuring
The location used for content storage can be configured in the Web Console under Configuration → Content File System Storage Provider.
Known Issues
None.
DDF Content REST CRUD Endpoint
The Content REST endpoint provides a CDR REST Retrieve v2.0-compliant DDF endpoint that allows clients to perform CRUD operations on the Content Repository using REST, a simple architectural style that performs communication using HTTP.
The URL exposing the REST functionality will be located at http://<DDF_HOST>:<DDF_PORT>/services/content, where DDF_HOST is the IP address of where DDF is installed and DDF_PORT is the port number on which DDF is listening.
The Content REST CRUD endpoint provides the capability to read, create, update, and delete content in the Content Repository, as well as create, update, and delete metacards in the catalog provider, i.e., the Metadata Catalog (MDC). Furthermore, this endpoint allows the client to perform the create/update/delete operations on just the Content Repository, just the MDC, or both in one operation.
|
The Content Framework is currently transactional for create operations only. When the client sends a create request to create content in the DDF Content Repository, process the content to create a metacard, and ingest it into the MDC (i.e., directive=STORE_AND_PROCESS), and a problem is encountered during the catalog ingest, the content is removed from the DDF Content Repository, analogous to a rollback. This keeps the DDF Content Repository and the MDC in sync. The Content Framework does not support rollback capability for update or delete operations that affect both the DDF Content Repository and the MDC. |
Using
The Content REST CRUD endpoint provides the capability to read, create, update, and delete content in the DDF Content Repository as well as create, update, and delete metacards in the catalog provider as follows. Sample requests and responses are provided in a separate table.
Operation |
HTTP Request |
Details |
Example URL |
Create Content and Catalog Entry |
HTTP POST |
The multipart/form-data REST request contains the binary data to be stored in the DDF Content Repository and to be parsed to create a metacard for ingest into the MDC. This binary data can be included in the request’s body or as a file attachment. An HTTP 201 CREATED status code is returned to the client with:
|
Where the |
Create Content Only |
HTTP POST |
The multipart/form-data REST request contains the binary data to be stored in the DDF Content Repository. This binary data can be included in the request’s body or as a file attachment. An HTTP 201 CREATED status code is returned to the client with:
|
http://<DDF_HOST>:<DDF_PORT>/services/content Where the |
Create Catalog Entry Only |
HTTP POST |
The multipart/form-data REST request contains the binary data to be parsed to create a metacard for ingest into the MDC. This binary data can be included in the request’s body or as a file attachment. An HTTP 200 OK status code is returned to the client with:
|
Where the |
Update Content and Catalog Entry |
HTTP PUT |
The ID of the content item in the DDF Content Repository to be updated is appended to the end of the URL. The body of the REST request contains the binary data to update the DDF Content Repository. An HTTP 200 OK status code is returned to the client with:
|
Where |
Update Content Only |
HTTP PUT |
The ID of the content item in the DDF Content Repository to be updated is appended to the end of the URL. The body of the REST request contains the data to update the DDF Content Repository. An HTTP 200 OK status code is returned to the client with:
|
http://<DDF_HOST>:<DDF_PORT>/services/content/ABC123 Where |
Update Catalog Entry Only and Content ID is provided |
HTTP PUT |
The ID of the content item in the DDF Content Repository to be updated is appended to the end of the URL. The body of the REST request contains the data to update the catalog entry in the MDC. An HTTP 200 OK status code is returned to the client with:
|
Where |
Update Catalog Entry Only and Content URI is provided |
HTTP PUT |
The URI of the content item in the MDC to be updated is specified in the contentUri HTTP header. The body of the REST request contains the data to update the catalog entry in the MDC. An HTTP 200 OK status code is returned to the client with: Catalog-ID HTTP header set to the catalog ID that was updated in the MDC |
The |
Delete Content and Catalog Entry |
HTTP DELETE |
The ID of the content item in the DDF Content Repository to be deleted is appended to the end of the URL. HTTP status code of 204 NO CONTENT is returned upon successful deletion.
|
Where |
Delete Content Only |
HTTP DELETE |
The ID of the content item in the DDF Content Repository to be deleted is appended to the end of the URL. HTTP status code of 204 NO CONTENT is returned upon successful deletion. |
Where |
Delete Catalog Entry Only |
HTTP DELETE |
The URI of the content item in the MDC to be deleted is specified in the contentUri HTTP header. HTTP status code of 204 NO CONTENT is returned to the client upon successful deletion with:
|
The |
Read |
HTTP GET |
The ID of the content item in the DDF Content Repository to be retrieved is appended to the end of the URL. An HTTP 200 OK status code is returned upon successful read, and the contents of the retrieved content item are contained in the HTTP body. |
Where |
|
For all Content REST CRUD commands, only one content item ID is supported in the URL; i.e., bulk operations are not supported. |
Interact with REST Endpoint
Any web browser can be used to perform a REST read. Various other tools and libraries can be used to perform the other HTTP operations on the REST endpoint (e.g., soapUI, cURL, etc.).
Create Request Multipart/Form-Data Parameters
The create (HTTP POST) request is a multipart/form-data request, allowing the binary data (i.e., the content) to be either included in the request’s body or attached as a file. This binary data is defined in a Content-Disposition part of the request where the name parameter is set to file, and the optional filename parameter indicates the name of the file that the content should be stored as.
Optional form parameters for the create request are the directive and contentUri. The directive form parameter’s value can be either STORE, PROCESS, or STORE_AND_PROCESS, indicating whether the content should be only stored in the Content Repository, only processed to generate a metacard and then ingested into the MDC, or both. The directive form parameter defaults to STORE_AND_PROCESS if it is not specified.
The contentUri form parameter allows the client to specify the URI of a product stored remotely/externally (relative to DDF). This contentUri is used to indicate that the client will manage the content storage but wants the Content Framework to parse the content and create/update/delete a catalog entry in the MDC using this content URI as the entry’s product URI. This parameter is used when the directive is set to PROCESS.
Update and Delete Request HTTP Header Parameters
Two optional HTTP header parameters are available on the update and delete RESTful URLs.
The directive header parameter allows the client to optionally direct the Content Framework to:

- only store the content in the DDF Content Repository (directive=STORE), or
- store the content in the repository and parse the content to create a metacard (directive=STORE_AND_PROCESS); this metacard is then created/updated/deleted in the Metadata Catalog (by invoking the Catalog Framework operations).
STORE_AND_PROCESS is the default value for the directive header parameter. The directive header parameter is only used on the PUT and DELETE RESTful URLs that have a contentId in the URL.
The contentUri header parameter allows the client to specify the URI of a product stored remotely/externally (relative to DDF). The contentUri header parameter is only used with the PUT and DELETE RESTful URLs, where the contentId is not appended to the URL.
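Putting these header parameters together, a catalog-entry-only update for externally stored content might be assembled as below. The host, port, file name, and content URI are placeholders for illustration, and the command is only echoed here rather than executed against a live DDF instance:

```shell
#!/bin/sh
# Assemble (but do not send) a PUT that updates only the catalog entry.
# No contentId is appended to the URL; the contentUri header identifies
# the entry. All values below are hypothetical placeholders.
DDF_HOST="localhost"
DDF_PORT="8181"
CONTENT_URI="http://example.com/some/path/file.json"
CMD="curl -i -X PUT -H 'contentUri: $CONTENT_URI' -H 'Content-Type: application/json;id=geojson' --data-binary @file.json http://$DDF_HOST:$DDF_PORT/services/content/"
echo "$CMD"
```

Compare this with the directive header form: the directive header is used only when a contentId is present in the URL, while the contentUri header is used when it is not.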
Sample Requests and Responses
The table below displays sample REST requests and their responses for each of the operations supported by the Content REST endpoint.
For the examples below, DDF was running on host DDF_HOST on port DDF_PORT. Also, for all examples below the binary data, i.e., the "content", is not included in the request’s body.
| Operation | Request | Response |
|---|---|---|
Create Content and Catalog Entry |
POST http://DDF_HOST:DDF_PORT/services/content/ HTTP/1.1 Content-Type: multipart/form-data; boundary=ARCFormBoundaryuxprlpjxmakbj4i --ARCFormBoundaryuxprlpjxmakbj4i Content-Disposition: form-data; name="directive" STORE_AND_PROCESS --ARCFormBoundaryuxprlpjxmakbj4i Content-Disposition: form-data; name="file"; filename="C:\DDF\geojson_valid.json" Content-Type: application/json;id=geojson <content included in payload but omitted here for brevity> --ARCFormBoundaryuxprlpjxmakbj4i-- |
HTTP/1.1 201 Created Catalog-ID: e82a31253e634a409c83d7164638f029 Content-ID: ef0ef614bbdb4ede99e2371ebd2280ee Content-Length: 0 Content-URI: content:ef0ef614bbdb4ede99e2371ebd2280ee Date: Wed, 13 Feb 2013 21:56:15 GMT Location: http://127.0.0.1:8181/services/content/ef0ef614bbdb4ede99e2371ebd2280ee Server: Jetty(7.5.4.v20111024) |
Create Content Only |
POST http://DDF_HOST:DDF_PORT/services/content/ HTTP/1.1 Content-Type: multipart/form-data; boundary=ARCFormBoundaryuxprlpjxmakbj4i --ARCFormBoundaryuxprlpjxmakbj4i Content-Disposition: form-data; name="directive" STORE --ARCFormBoundaryuxprlpjxmakbj4i Content-Disposition: form-data; name="file"; filename="C:\DDF\geojson_valid.json" Content-Type: application/json;id=geojson <content included in payload but omitted here for brevity> --ARCFormBoundaryuxprlpjxmakbj4i-- |
HTTP/1.1 201 Created Content-ID: 7d671cd8e9aa4637960b37c7b3870aed Content-Length: 0 Content-URI: content:7d671cd8e9aa4637960b37c7b3870aed Date: Wed, 13 Feb 2013 21:56:16 GMT Location: http://127.0.0.1:8181/services/content/7d671cd8e9aa4637960b37c7b3870aed Server: Jetty(7.5.4.v20111024) |
Create Catalog Entry Only |
POST http://DDF_HOST:DDF_PORT/services/content/ HTTP/1.1 Content-Type: multipart/form-data; boundary=ARCFormBoundaryuxprlpjxmakbj4i --ARCFormBoundaryuxprlpjxmakbj4i Content-Disposition: form-data; name="directive" PROCESS --ARCFormBoundaryuxprlpjxmakbj4i Content-Disposition: form-data; name="contentUri" http://localhost:8080/some/path/file.json --ARCFormBoundaryuxprlpjxmakbj4i Content-Disposition: form-data; name="file"; filename="C:\DDF\geojson_valid.json" Content-Type: application/json;id=geojson <content included in payload but omitted here for brevity> --ARCFormBoundaryuxprlpjxmakbj4i-- |
HTTP/1.1 200 OK Catalog-ID: 94d8fae228a84e29a7396196542e2608 Content-Length: 0 Date: Wed, 13 Feb 2013 21:56:16 GMT Server: Jetty(7.5.4.v20111024) |
Update Content and Catalog Entry |
PUT http://DDF_HOST:DDF_PORT/services/content/bf9763c2e74d46f68a9ed591c4b74591 HTTP/1.1 Accept-Encoding: gzip,deflate directive: STORE_AND_PROCESS Content-Type: application/json;id=geojson User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:8181 Content-Length: 9608 <content included in payload but omitted here for brevity> |
HTTP/1.1 200 OK Catalog-ID: d9ccbc9d139a4abbb0b1cdded1de0921 Content-ID: bf9763c2e74d46f68a9ed591c4b74591 Content-Length: 0 Date: Wed, 13 Feb 2013 21:56:25 GMT Server: Jetty(7.5.4.v20111024) |
Update Content Only |
PUT http://DDF_HOST:DDF_PORT/services/content/bf9763c2e74d46f68a9ed591c4b74591 HTTP/1.1 Accept-Encoding: gzip,deflate directive: STORE Content-Type: application/json;id=geojson User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:8181 Content-Length: 9608 <content included in payload but omitted here for brevity> |
HTTP/1.1 200 OK Content-ID: 7a702cd5c95347d2aa79ccc25b39e4f6 Content-Length: 0 Date: Wed, 13 Feb 2013 21:56:25 GMT Server: Jetty(7.5.4.v20111024) |
Update Catalog Entry Only and Content ID is provided (STORE_AND_PROCESS) |
PUT http://DDF_HOST:DDF_PORT/services/content/bf9763c2e74d46f68a9ed591c4b74591 HTTP/1.1 Accept-Encoding: gzip,deflate directive: STORE_AND_PROCESS Content-Type: application/json;id=geojson User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:8181 Content-Length: 9608 <content included in payload but omitted here for brevity> --- |
HTTP/1.1 200 OK Catalog-ID: 54a42215bf514322ba60bee97dab68e7 Content-ID: bf9763c2e74d46f68a9ed591c4b74591 Content-Length: 0 Date: Wed, 11 Sep 2013 15:22:59 GMT Server: Jetty(7.6.8.v20121106) |
Update Catalog Entry Only and Content URI is provided (PROCESS only) |
PUT http://DDF_HOST:DDF_PORT/services/content/ HTTP/1.1 Accept-Encoding: gzip,deflate contentUri: http://DDF_HOST:DDF_PORT/some/path4/file.json Content-Type: application/json;id=geojson <content included in payload but omitted here for brevity> |
HTTP/1.1 200 OK Catalog-ID: b7a95aab99cd4318b8021eeef2715e4b Content-Length: 0 Date: Wed, 11 Sep 2013 15:23:01 GMT Server: Jetty(7.6.8.v20121106) |
Delete Content and Catalog Entry |
DELETE http://DDF_HOST:DDF_PORT/services/content/911e27aba723448ea420142b0e793d38 HTTP/1.1 Accept-Encoding: gzip,deflate directive: STORE_AND_PROCESS User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:8181 |
HTTP/1.1 204 No Content Catalog-ID: 5236910acbd14d97a786f1fa95d43d58 Content-ID: 911e27aba723448ea420142b0e793d38 Content-Length: 0 Date: Wed, 13 Feb 2013 21:56:31 GMT Server: Jetty(7.5.4.v20111024) |
Delete Content Only |
DELETE http://DDF_HOST:DDF_PORT/services/content/eb91c8ee225d4cddb4d9fbe2d9bf5d7c HTTP/1.1 Accept-Encoding: gzip,deflate directive: STORE User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:8181 |
HTTP/1.1 204 No Content Content-ID: eb91c8ee225d4cddb4d9fbe2d9bf5d7c Content-Length: 0 Date: Wed, 13 Feb 2013 21:56:31 GMT Server: Jetty(7.5.4.v20111024) |
Delete Catalog Entry Only |
DELETE http://DDF_HOST:DDF_PORT/services/content/ HTTP/1.1 Accept-Encoding: gzip,deflate contentUri:http://DDF_HOST:DDF_PORT/some/path5/file.json User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:8181 |
HTTP/1.1 204 No Content Catalog-ID: c9a2b1c395f74300b33529483f095196 Content-Length: 0 Date: Wed, 13 Feb 2013 21:56:31 GMT Server: Jetty(7.5.4.v20111024) |
Read |
GET http://DDF_HOST:DDF_PORT/services/content/d34fd2b31f314aa6ade162015ba3016f HTTP/1.1 Accept-Encoding: gzip,deflate User-Agent: Jakarta Commons-HttpClient/3.1 Host: 127.0.0.1:8181 |
HTTP/1.1 200 OK Content-Length: 9579 Content-Type: application/json;id=geojson Date: Wed, 13 Feb 2013 21:56:24 GMT Server: Jetty(7.5.4.v20111024) ... (remaining data of content item retrieved omitted for brevity) ... |
cURL Commands
The table below illustrates sample cURL commands corresponding to a few of the above REST requests. Pay special attention to the flags, as they vary between operations.
For these examples, DDF was running on host DDF_HOST on port DDF_PORT. We ingested/updated a file named geojson_valid.json whose MIME type was application/json;id=geojson, and whose content ID ended up being CONTENT_ID.
To perform each operation without using the catalog, replace STORE_AND_PROCESS with STORE. To manipulate the catalog entry only, replace STORE_AND_PROCESS with PROCESS.
| Operation | Command |
|---|---|
Create Content and Catalog Entry |
curl -i -X POST -F "directive=STORE_AND_PROCESS" -F "filename=geojson_valid.json" -F "file=@geojson_valid.json;type=application/json;id=geojson" http://DDF_HOST:DDF_PORT/services/content/ |
Update Content and Catalog Entry |
curl -i -X PUT -H "directive: STORE_AND_PROCESS" -H "Content-Type: application/json;id=geojson" --data-binary "@geojson_valid.json" http://DDF_HOST:DDF_PORT/services/content/CONTENT_ID |
Delete Content and Catalog Entry |
curl -i -X DELETE -H "directive: STORE_AND_PROCESS" http://DDF_HOST:DDF_PORT/services/content/CONTENT_ID |
Read |
curl -i -X GET http://DDF_HOST:DDF_PORT/services/content/CONTENT_ID |
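As noted above, the directive header selects the scope of the delete operation. A short shell sketch (placeholders only; the commands are printed rather than executed) showing all three variants side by side:

```shell
# Print (not execute) the delete command for each directive described above.
# DDF_HOST, DDF_PORT, and CONTENT_ID are placeholders, not real values.
base="http://DDF_HOST:DDF_PORT/services/content/CONTENT_ID"
for directive in STORE_AND_PROCESS STORE PROCESS; do
  cmd="curl -i -X DELETE -H \"directive: $directive\" $base"
  echo "$cmd"
done
```

STORE_AND_PROCESS removes both the stored content and the catalog entry; STORE removes the content only; PROCESS removes the catalog entry only.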
Install and Uninstall
The Content REST CRUD endpoint, packaged as the content-rest-endpoint feature, can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuration
The Content REST CRUD endpoint has no configurable properties. It can only be installed or uninstalled.
Known Issues
None
Overview
This page supports integration of this application with external frameworks.
Platform Global Settings
The Platform Global Settings are the system-wide configuration settings used throughout DDF to specify the information about the machine hosting DDF.
Configuration
Configuration can be performed using the processes described in the Configuring DDF section. The configurable properties for the platform-wide configuration are accessed from Configuration → Platform Global Configuration in the Web Console.
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
Protocol |
protocol |
String |
Default protocol that should be used to connect to this machine. |
http |
yes |
Host |
host |
String |
The host name or IP address of the machine that DDF is running on. Do not enter localhost. |
yes |
|
Port |
port |
String |
The port that DDF is running on. |
yes |
|
Site Name |
id |
String |
The site name for this DDF instance. |
ddf.distribution |
yes |
Version |
version |
String |
The version of DDF that is running. This value should not be changed from the factory default. |
DDF 2.3.0 |
yes |
Organization |
organization |
String |
The organization responsible for this installation of DDF. |
Codice Foundation |
yes |
Platform UI Settings
The Platform UI Settings are the system-wide configuration settings used throughout DDF to customize certain aspects of the DDF UI.
Configuration
Configuration can be performed using the processes described in the Configuring DDF section. The configurable properties for the platform-wide configuration are accessed from Configuration → Platform UI Configuration in the Web Console.
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
Enable System Usage Message |
systemUsageEnabled |
Boolean |
Turns on a system usage message, which is shown when the Search Application is opened |
yes |
|
System Usage Message Title |
systemUsageTitle |
String |
A title for the system usage Message when the application is opened |
yes |
|
System Usage Message |
systemUsageMessage |
String |
A system usage message to be displayed to the user each time the user opens the application |
yes |
|
Show System Usage Message once per session |
systemUsageOncePerSession |
Boolean |
With this selected, the system usage message will be shown once for each browser session. Uncheck this to have the usage message appear every time the search window is opened or refreshed. |
true |
yes |
Header |
header |
String |
Specifies the header text to be rendered on all pages. |
yes |
|
Footer |
footer |
String |
Specifies the footer text to be rendered on all pages. |
yes |
|
Text Color |
color |
String |
Specifies the text color of the header and footer. Use HTML/CSS color names or #rrggbb notation. |
yes |
|
Background Color |
background |
String |
Specifies the background color of the header and footer. Use HTML/CSS color names or #rrggbb notation. |
yes |
Landing Page
The DDF landing page offers a starting point and general information for a DDF node. It is accessible at /(index|home|landing(.htm|html)).
Configuration
Configuration can be performed using the processes described in the Configuring DDF section. The configurable properties for the landing page configuration are accessed from Platform → Landing Page in the Admin UI.
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
Description |
description |
String |
Specifies the description to display on the landing page. |
DDF is a free and open-source common data layer that abstracts services and business logic from the underlying data structures to enable rapid integration of new data sources. |
yes |
Phone Number |
phone |
String |
Specifies the phone number to display on the landing page. |
yes |
|
Email Address |
String |
Specifies the email address to display on the landing page. |
yes |
||
External Web Site |
externalUrl |
String |
Specifies the external web site URL to display on the landing page. |
yes |
|
Announcements |
announcements |
String |
Announcements that will be displayed on the landing page. Can be prefixed with a date of the form mm/dd/yy, leading zeroes not required. |
yes |
DDF Mime Framework
Mime Type Mapper
The MimeTypeMapper is the entry point in DDF for resolving file extensions to mime types, and vice versa.
MimeTypeMappers are used by the ResourceReader to determine the file extension for a given mime type in aid of retrieving a product. MimeTypeMappers are also used by the FileSystemProvider in the Content Framework to read a file from the content file repository.
The MimeTypeMapper maintains a list of all of the MimeTypeResolvers in DDF.
The MimeTypeMapper accesses each MimeTypeResolver according to its priority until the provided file extension is successfully mapped to its corresponding mime type. If no mapping is found for the file extension, null is returned for the mime type. Similarly, the MimeTypeMapper accesses each MimeTypeResolver according to its priority until the provided mime type is successfully mapped to its corresponding file extension. If no mapping is found for the mime type, null is returned for the file extension.
Included Mime Type Mappers
DDF Mime Type Mapper
The DDF Mime Type Mapper is the core implementation of the DDF Mime API. It provides access to all MimeTypeResolvers within DDF, which provide mapping of mime types to file extensions and file extensions to mime types.
Installing and Uninstalling
The DDF Mime Type Mapper is bundled in the mime-core feature, which is part of the mime-core-app application. This feature can be installed and uninstalled using the normal processes described in the Configuration section.
The mime-core feature is installed by default.
Configuring
There is no configuration for this feature.
Mime Type Resolver
A MimeTypeResolver is a DDF service that can map a file extension to its corresponding mime type and, conversely, can map a mime type to its file extension.
MimeTypeResolvers are assigned a priority (0-100, with higher numbers indicating higher priority). This priority is used to sort all of the MimeTypeResolvers in the order in which they should be checked for mapping a file extension to a mime type (or vice versa). This priority also allows custom MimeTypeResolvers to be invoked before default MimeTypeResolvers by setting the custom resolver’s priority higher than the default’s.
MimeTypeResolvers are not typically invoked directly. Rather, the MimeTypeMapper maintains a list of MimeTypeResolvers (sorted by their priority) that it invokes to resolve a mime type to its file extension (or to resolve a file extension to its mime type).
Tika Mime Type Resolver
The TikaMimeTypeResolver is a MimeTypeResolver that is implemented using the Apache Tika open source product.
Using the Apache Tika content analysis toolkit, the TikaMimeTypeResolver provides support for resolving over 1300 mime types. (The tika-mimetypes.xml file, in which Apache Tika defines all of the default mime types it supports, is attached to this page.)
The TikaMimeTypeResolver is assigned a default priority of -1 to ensure that it is always invoked last by the MimeTypeMapper. This ensures that any custom MimeTypeResolvers that may be installed are invoked before the TikaMimeTypeResolver.
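The priority ordering can be illustrated with a small shell sketch. CustomSiteResolver is a hypothetical high-priority resolver; the mapper consults resolvers highest priority first, so Tika’s -1 guarantees it is consulted last:

```shell
# Each line is "<priority> <resolver name>"; the mapper consults higher
# priorities first, so the Tika resolver's -1 puts it at the end.
resolvers="50 CustomSiteResolver
10 DDF_Custom_Mime_Type_Resolver
-1 TikaMimeTypeResolver"
ordered=$(printf '%s\n' "$resolvers" | sort -rn)
echo "$ordered"
```

This is only a sketch of the ordering rule, not DDF’s actual resolver registry.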
Using
The TikaMimeTypeResolver provides the bulk of the default mime type support for DDF.
Installing and Uninstalling
The TikaMimeTypeResolver is bundled as the mime-tika-resolver feature in the mime-tika-app application. This feature can be installed and uninstalled using the normal processes described in the Configuring DDF section.
This feature is installed by default.
Configuring
There are no configuration properties for the mime-tika-resolver.
Implementation Details
Exported Services
| Registered Interface | Service Property | Value |
|---|---|---|
|
tika-mimetypes.xml |
Custom Mime Type Resolver
The Custom Mime Type Resolver is a MimeTypeResolver that defines the custom mime types that DDF will support out of the box. These are mime types not supported by the default TikaMimeTypeResolver.
The custom mime types supported by the Custom Mime Type Resolver and configured for DDF out of the box are:
| File Extension | Mime Type |
|---|---|
nitf |
image/nitf |
ntf |
image/nitf |
json |
application/json;id=geojson |
New custom mime type resolver mappings can be added using the Web Console.
As a MimeTypeResolver, the Custom Mime Type Resolver will provide methods to map the file extension to the corresponding mime type, and vice versa.
Using
The Custom Mime Type Resolver is used when mime types that are not supported by DDF out of the box need to be added. By adding custom mime type resolvers to DDF, new content with that mime type can be processed by DDF.
Installing and Uninstalling
One Custom Mime Type Resolver is configured and installed out of the box for the image/nitf mime type. This custom resolver is bundled in the mime-core-app application and is part of the mime-core feature. This feature can be installed and uninstalled using the normal processes described in the Configuration section.
Additional Custom Mime Type Resolvers can be added for other custom mime types.
Configuring
This component can be configured using the normal processes described in the Configuring DDF section.
The configurable properties for the Custom Mime Type Resolver are accessed from the MIME Custom Types configuration in the Web Console.
Managed Service Factory PID
-
DDF_Custom_Mime_Type_Resolver
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
Resolver Name |
name |
String |
Unique name for the custom mime type resolver. |
N/A |
Yes |
Priority |
priority |
Integer |
Execution priority of the resolver. Range is 0 to 100, with 100 being the highest priority. |
10 |
Yes |
File Extensions to Mime Types |
customMimeTypes |
String |
Comma-delimited list of key/value pairs where key is the file extension and value is the mime type, e.g., |
N/A |
Yes |
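For illustration only (the actual value is set via the MIME Custom Types configuration in the Web Console), a hypothetical customMimeTypes value covering the nitf/ntf mappings listed earlier might look like:

```
nitf=image/nitf,ntf=image/nitf
```

Each pair maps one file extension to one mime type, with pairs separated by commas.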
Implementation Details
| Registered Interface | Availability | Multiple |
|---|---|---|
|
optional |
true |
|
optional |
true |
|
optional |
true |
| Registered Interface | Service Property | Value |
|---|---|---|
|
|
|
|
|
Metrics Collection
The Metrics Collection application collects data for all of the pre-configured metrics in DDF and stores it in custom JMX Management Bean (MBean) attributes. Samples of each metric’s data are collected every 60 seconds and stored in the <DDF_INSTALL_DIR>/data/metrics directory, with each metric stored in its own .rrd file. Refer to the Metrics Reporting Application for how the stored metrics data can be viewed.
|
Do not remove the <DDF_INSTALL_DIR>/data/metrics directory or the .rrd files within it; doing so permanently deletes the collected metrics data. Also note that if DDF is uninstalled/re-installed, all existing metrics data will be permanently lost. |
The metrics currently being collected by DDF are:
| Metric | JMX MBean Name | MBean Attribute Name | Description |
|---|---|---|---|
Catalog Exceptions |
ddf.metrics.catalog:name=Exceptions |
Count |
A count of the total number of exceptions, of all types, thrown across all catalog queries executed. |
Catalog Exceptions Federation |
ddf.metrics.catalog:name=Exceptions.Federation |
Count |
A count of the total number of Federation exceptions thrown across all catalog queries executed. |
Catalog Exceptions Source Unavailable |
ddf.metrics.catalog:name=Exceptions.SourceUnavailable |
Count |
A count of the total number of SourceUnavailable exceptions thrown across all catalog queries executed. These exceptions occur when the source being queried is currently not available. |
Catalog Exceptions Unsupported Query |
ddf.metrics.catalog:name=Exceptions.UnsupportedQuery |
Count |
A count of the total number of UnsupportedQuery exceptions thrown across all catalog queries executed. These exceptions occur when the query being executed is not supported or is invalid. |
Catalog Ingest Created |
ddf.metrics.catalog:name=Ingest.Created |
Count |
A count of the number of catalog entries created in the Metadata Catalog. |
Catalog Ingest Deleted |
ddf.metrics.catalog:name=Ingest.Deleted |
Count |
A count of the number of catalog entries deleted from the Metadata Catalog. |
Catalog Ingest Updated |
ddf.metrics.catalog:name=Ingest.Updated |
Count |
A count of the number of catalog entries updated in the Metadata Catalog. |
Catalog Queries |
ddf.metrics.catalog:name=Queries |
Count |
A count of the number of queries attempted. |
Catalog Queries Comparison |
ddf.metrics.catalog:name=Queries.Comparison |
Count |
A count of the number of queries attempted that included a string comparison criteria as part of the search criteria, e.g., PropertyIsLike, PropertyIsEqualTo, etc. |
Catalog Queries Federated |
ddf.metrics.catalog:name=Queries.Federated |
Count |
A count of the number of federated queries attempted. |
Catalog Queries Fuzzy |
ddf.metrics.catalog:name=Queries.Fuzzy |
Count |
A count of the number of queries attempted that included a string comparison criteria with fuzzy searching enabled as part of the search criteria. |
Catalog Queries Spatial |
ddf.metrics.catalog:name=Queries.Spatial |
Count |
A count of the number of queries attempted that included a spatial criteria as part of the search criteria. |
Catalog Queries Temporal |
ddf.metrics.catalog:name=Queries.Temporal |
Count |
A count of the number of queries attempted that included a temporal criteria as part of the search criteria. |
Catalog Queries Total Results |
ddf.metrics.catalog:name=Queries.TotalResults |
Mean |
An average of the total number of results returned from executed queries. This total results data is averaged over the metric’s sample rate. |
Catalog Queries Xpath |
ddf.metrics.catalog:name=Queries.Xpath |
Count |
A count of the number of queries attempted that included a Xpath criteria as part of the search criteria. |
Catalog Resource Retrieval |
ddf.metrics.catalog:name=Resource |
Count |
A count of the number of products retrieved. |
Services Latency |
ddf.metrics.services:name=Latency |
Mean |
The response time (in milliseconds) from receipt of the request at the endpoint until the response is about to be sent to the client from the endpoint. This response time data is averaged over the metric’s sample rate. |
Source Metrics
Metrics are also collected on a per-source basis for each configured Federated Source and Catalog Provider. When the source is configured, the metrics listed in the table below are automatically created. These metrics are collected for each request that is either an enterprise query or a query that explicitly lists the source(s) to query. When the source is deleted (or renamed), the associated metrics' MBeans and Collectors are also deleted. However, the RRD files in the data/metrics directory containing the collected metrics remain indefinitely and remain accessible from the Metrics tab in the Web Console.
In the table below, the metric name is based on the Source’s ID (indicated by <sourceId>).
| Metric | JMX MBean Name | MBean Attribute Name | Description |
|---|---|---|---|
Source <sourceId> Exceptions |
ddf.metrics.catalog.source:name=<sourceId>.Exceptions |
Count |
A count of the total number of exceptions, of all types, thrown from catalog queries executed on this source. |
Source <sourceId> Queries |
ddf.metrics.catalog.source:name=<sourceId>.Queries |
Count |
A count of the number of queries attempted on this source. |
Source <sourceId> Queries Total Results |
ddf.metrics.catalog.source:name=<sourceId>.Queries.TotalResults |
Mean |
An average of the total number of results returned from executed queries on this source. This total results data is averaged over the metric’s sample rate. |
For example, if a Federated Source was created with a name of fs-1, then the following metrics would be created for it:
-
Source Fs1 Exceptions
-
Source Fs1 Queries
-
Source Fs1 Queries Total Results
If this federated source is then renamed to fs-1-rename, the MBeans and Collectors for the fs-1 metrics are deleted, and new MBeans and Collectors are created with the new names:
-
Source Fs1 Rename Exceptions
-
Source Fs1 Rename Queries
-
Source Fs1 Rename Queries Total Results
Note that the metrics with the previous name remain on the Metrics tab because the data collected while the source had that name remains valid and thus needs to be accessible. Therefore, it is possible to access metrics data for sources renamed months ago, i.e., until DDF is reinstalled or the metrics data is deleted from the <DDF_INSTALL_DIR>/data/metrics directory. Also note that the source metrics' names have all non-alphanumeric characters removed and are converted to camelCase.
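The sanitization described above can be sketched in shell. This is only an illustration consistent with the fs-1 example, not DDF’s actual implementation; it produces the camelCase base name from which display names such as Source Fs1 Queries are derived:

```shell
# Sketch (not DDF's actual code): strip non-alphanumeric characters and
# camelCase the remaining fragments, e.g. "fs-1" -> "fs1",
# "fs-1-rename" -> "fs1Rename".
camel() {
  printf '%s' "$1" | awk '{
    n = split($0, parts, "[^[:alnum:]]+")
    out = ""
    for (i = 1; i <= n; i++) {
      if (parts[i] == "") continue
      if (out == "") out = parts[i]
      else out = out toupper(substr(parts[i], 1, 1)) substr(parts[i], 2)
    }
    print out
  }'
}
camel "fs-1"
camel "fs-1-rename"
```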
Usage
The Metrics Collection application is used when collection of historical metrics data, such as catalog query metrics, message latency, or individual sources' metrics, is desired.
Install and Uninstall
The Metrics Collecting application is installed by default.
The catalog level metrics (packaged as the catalog-core-metricsplugin feature) can be installed and uninstalled using the normal processes described in the Configuration section.
Similarly, the source-level metrics (packaged as the catalog-core-sourcemetricsplugin feature) can be installed and uninstalled using the normal processes described in the Configuration section.
Configuration
No configuration is made for the Metrics Collecting application. All of the metrics that it collects data on are either pre-configured in DDF out of the box or dynamically created as sources are created or deleted.
Known Issues
None
Metrics Reporting Application
The DDF Metrics Reporting application provides access to historical system metrics data, collected while DDF is running, in graphic (PNG), comma-separated values (CSV), spreadsheet (XLS), PowerPoint (PPT), XML, and JSON formats. Aggregate reports (weekly, monthly, and yearly) are also provided, in which all collected metrics are included. Aggregate reports are available in Excel and PowerPoint formats.
Usage
The DDF Metrics Reporting application provides a web console plugin that adds a new tab to the Admin Console with the title of Metrics. When selected, the Metrics tab displays a list of all of the metrics being collected by DDF, e.g., Catalog Queries, Catalog Queries Federated, Catalog Ingest Created, etc.
For each metric in the list, a set of hyperlinks is displayed under each column, and each column’s header shows one of the available time ranges. The time ranges currently supported are all measured back from the time the hyperlink is selected: 15 minutes, 1 hour, 1 day, 1 week, 1 month, 3 months, 6 months, and 1 year.
All metrics reports are generated from the collected metric data stored in the <DDF_INSTALL_DIR>/data/metrics directory. All files in this directory are generated by the JmxCollector using RRD4J, an open source Round Robin Database library for Java. All files in this directory have the .rrd file extension and are binary files, so they cannot be opened directly; they should only be accessed using the Metrics tab’s hyperlinks. There is one RRD file per metric being collected. Each RRD file is sized at creation time and never increases in size as data is collected. One year’s worth of metric data requires approximately 1 MB of file storage.
|
Do not remove the <DDF_INSTALL_DIR>/data/metrics directory or the .rrd files within it; doing so permanently deletes the collected metrics data. Also note that if DDF is uninstalled/re-installed, all existing metrics data will be permanently lost. |
A hyperlink is provided for each format in which the metric’s historical data can be displayed. For example, the PNG hyperlink for 15m for the Catalog Queries metric maps to http://<DDF_HOST>:<DDF_PORT>/services/internal/metrics/catalogQueries.png?dateOffset=900, where dateOffset=900 requests the previous 900 seconds (15 minutes) of data to be graphed.
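Since dateOffset is just the desired look-back window expressed in seconds, custom ranges can be computed and substituted into the URL (host and port are placeholders):

```shell
# dateOffset is the look-back window in seconds.
offset_15m=$((15 * 60))        # 15 minutes -> 900
offset_30m=$((30 * 60))        # 30 minutes -> 1800
offset_8h=$((8 * 60 * 60))     # 8 hours    -> 28800
echo "http://<DDF_HOST>:<DDF_PORT>/services/internal/metrics/catalogQueries.png?dateOffset=$offset_15m"
```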
Note that the date format will vary according to the regional/locale settings for the server.
All of the metric graphs are in PNG format and are displayed on their own page. The user may use the browser’s back button to return to the Admin Console or, when selecting the hyperlink for a graph, use the right mouse button to open the graph in a separate browser tab or window, which keeps the Admin Console displayed. The screen shot below is a sample graph of the Catalog Queries metric data for the 15 minutes preceding the moment the link was selected. Note that the y-axis label and the title use the metric’s name (Catalog Queries) by default. The average, min, and max of all of the metric’s data are summarized in the lower left corner of the graph.
The user can also specify custom time ranges by adjusting the URL used to access the metric’s graph. The Catalog Queries metric data may also be graphed for a specific time range by specifying the startDate and endDate query parameters in the URL.
|
Note that the Metrics endpoint URL says "internal." This indicates that this endpoint is intended for internal use by the DDF code. This endpoint is likely to change in future versions; therefore, any custom applications built to make use of it, as described below, should be made with caution. |
For example, to graph the Catalog Queries metric data from March 31, 2013, 6:00 am to April 1, 2013, 11:00 am (Arizona time zone, which is -07:00), the URL would be:
http://<DDF_HOST>:<DDF_PORT>/services/internal/metrics/catalogQueries.png?startDate=2013-03-31T06:00:00-07:00&endDate=2013-04-01T11:00:00-07:00
Or to view the last 30 minutes of data for the Catalog Queries metric, a custom URL with a dateOffset=1800 (30 minutes in seconds) could be used:
http://<DDF_HOST>:<DDF_PORT>/services/internal/metrics/catalogQueries.png?dateOffset=1800
The table below lists all of the options for the Metrics endpoint URL to execute custom metrics data requests:
| Parameter | Description | Example |
|---|---|---|
startDate |
Specifies the start of the time range of the search on the metric’s data (RFC-3339 date and time format, i.e., YYYY-MM-DDTHH:mm:ssZ). The date/time must be earlier than the endDate. |
startDate=2013-03-31T06:00:00-07:00 |
endDate |
Specifies the end of the time range of the search on the metric’s data (RFC-3339 date and time format, i.e., YYYY-MM-DDTHH:mm:ssZ). The date/time must be later than the startDate. |
endDate=2013-04-01T11:00:00-07:00 |
dateOffset |
Specifies an offset backwards from the current time, defining the time range of the search on the metric’s data. Defined in seconds and must be a positive integer. |
dateOffset=1800 |
yAxisLabel |
(Optional) The label to apply to the graph’s y-axis. Defaults to the metric’s name, e.g., Catalog Queries. |
Catalog Query Count |
title |
(Optional) The title to apply to the graph. Defaults to the metric’s name plus the time range used for the graph. This parameter is only applicable to the graph display format. |
Catalog Query Count for the last 15 minutes |
Metric Data Supported Formats
The metric’s historical data can be displayed in several formats, including the PNG format previously mentioned, a CSV file, an Excel .xls file, a PowerPoint .ppt file, an XML file, and a JSON file. The PNG, CSV, and XLS formats are accessed via hyperlinks provided in the Metrics tab web page. The PPT, XML, and JSON formats are accessed by specifying the format in the custom URL, e.g., http://<DDF_HOST>:<DDF_PORT>/services/internal/metrics/catalogQueries.json?dateOffset=1800.
The table below describes each of the supported formats, how to access them, and an example where applicable. (NOTE: all example URLs begin with
http://<DDF_HOST>:<DDF_PORT>
which is omitted in the table for brevity.)
| Display Format | Description | How To Access | Example URL | ||
|---|---|---|---|---|---|
PNG |
Displays the metric’s data as a PNG-formatted graph, where the x-axis is time and the y-axis is the metric’s sampled data values. |
Via hyperlink on the Metrics tab or directly via custom URL. |
Accessing Catalog Queries metric data for last 8 hours (8 * 60 * 60 = 28800 seconds): /services/internal/metrics/catalogQueries.png?dateOffset=28800& yAxisLabel=my%20label&title=my%20graph%20title Accessing Catalog Queries metric data between 6:00 am on March 10, 2013, and 10:00 am on April 2, 2013: /services/internal/metrics/catalogQueries.png? startDate=2013-03-10T06:00:00-07:00&endDate=2013-04-02T10:00:00-07:00& yAxisLabel=my%20label&title=my%20graph%20title Note that the yAxisLabel and title parameters are optional. |
||
CSV |
Displays the metric’s data as a Comma-Separated Value (CSV) file, which can be auto-displayed in Excel based on browser settings. The generated CSV file consists of two columns of data, Timestamp and Value, where the first row contains the column headers and the remaining rows contain the metric’s sampled data over the specified time range. |
Via hyperlink on the Metrics tab or directly via custom URL. |
Accessing Catalog Queries metric data for last 8 hours (8 * 60 * 60 = 28800 seconds): /services/internal/metrics/catalogQueries.csv?dateOffset=28800 Accessing Catalog Queries metric data between 6:00 am on March 10, 2013, and 10:00 am on April 2, 2013: /services/internal/metrics/catalogQueries.csv? startDate=2013-03-10T06:00:00-07:00&endDate=2013-04-02T10:00:00-07:00 |
||
XLS |
Displays the metric’s data as an Excel (XLS) file, which can be auto-displayed in Excel based on browser settings. The generated XLS file consists of: a title in the first row based on the metric’s name and the specified time range; column headers for Timestamp and Value; two columns of data containing the metric’s sampled data over the specified time range; and the total count, if applicable, in the last row. |
Via hyperlink on the Metrics tab or directly via custom URL. |
Accessing Catalog Queries metric data for last 8 hours (8 * 60 * 60 = 28800 seconds): /services/internal/metrics/catalogQueries.xls?dateOffset=28800 Accessing Catalog Queries metric data between 6:00 am on March 10, 2013, and 10:00 am on April 2, 2013: /services/internal/metrics/catalogQueries.xls? startDate=2013-03-10T06:00:00-07:00&endDate=2013-04-02T10:00:00-07:00 |
||
PPT |
Displays the metric’s data as a PowerPoint (PPT) file, which can be auto-displayed in PowerPoint based on browser settings. The generated PPT file consists of a single slide containing: a title based on the metric’s name; the metric’s PNG graph embedded as a picture in the slide; and the total count, if applicable. |
Via custom URL only |
Accessing Catalog Queries metric data for last 8 hours (8 * 60 * 60 = 28800 seconds): /services/internal/metrics/catalogQueries.ppt?dateOffset=28800 Accessing Catalog Queries metric data between 6:00 am on March 10, 2013, and 10:00 am on April 2, 2013: /services/internal/metrics/catalogQueries.ppt? startDate=2013-03-10T06:00:00-07:00&endDate=2013-04-02T10:00:00-07:00 |
||
XML |
Displays the metric’s data as an XML-formatted file. |
Via custom URL only |
Accessing Catalog Queries metric data for last 8 hours (8 * 60 * 60 = 28800 seconds): /services/internal/metrics/catalogQueries.xml?dateOffset=28800 Accessing Catalog Queries metric data between 6:00 am on March 10, 2013, and 10:00 am on April 2, 2013: /services/internal/metrics/catalogQueries.xml? startDate=2013-03-10T06:00:00-07:00&endDate=2013-04-02T10:00:00-07:00 Sample XML-formatted output would look like:
|
||
JSON |
Displays the metric’s data as a JSON-formatted file. |
Via custom URL only |
Accessing Catalog Queries metric data for last 8 hours (8 * 60 * 60 = 28800 seconds): /services/internal/metrics/catalogQueries.json?dateOffset=28800 Accessing Catalog Queries metric data between 6:00 am on March 10, 2013, and 10:00 am on April 2, 2013: /services/internal/metrics/catalogQueries.json? startDate=2013-03-10T06:00:00-07:00&endDate=2013-04-02T10:00:00-07:00 Sample JSON-formatted output would look like:
|
Metrics Aggregate Reports
The Metrics tab also provides aggregate reports for the collected metrics. These are reports that include data for all of the collected metrics for the specified time range.
The aggregate reports provided are:
-
Weekly reports for each week up to the past four complete weeks from current time. A complete week is defined as a week from Monday through Sunday. For example, if current time is Thursday, April 11, 2013, the past complete week would be from April 1 through April 7.
-
Monthly reports for each month up to the past 12 complete months from current time. A complete month is defined as the full month(s) preceding current time. For example, if current time is Thursday, April 11, 2013, the past complete 12 months would be from April 2012 through March 2013.
-
Yearly reports for the past complete year from current time. A complete year is defined as the full year preceding current time. For example, if current time is Thursday, April 11, 2013, the past complete year would be 2012.
An aggregate report in XLS format would consist of a single workbook (spreadsheet) with multiple worksheets in it, where a separate worksheet exists for each collected metric’s data. Each worksheet would display:
-
the metric’s name and the time range of the collected data,
-
two columns: Timestamp and Value, for each sample of the metric’s data that was collected during the time range, and
-
a total count (if applicable) at the bottom of the worksheet.
An aggregate report in PPT format would consist of a single slideshow with a separate slide for each collected metric’s data. Each slide would display:
-
a title with the metric’s name,
-
the PNG graph for the metric’s collected data during the time range, and
-
a total count (if applicable) at the bottom of the slide.
Hyperlinks are provided for each aggregate report’s time range in the supported display formats, which include Excel (XLS) and PowerPoint (PPT). Aggregate reports for custom time ranges can also be accessed directly via the URL:
http://<DDF_HOST>:<DDF_PORT>/services/internal/metrics/report.<format>?startDate=<start_date_value>&endDate=<end_date_value>
where <format> is either xls or ppt and the <start_date_value> and <end_date_value> specify the custom time range for the report.
The table below lists several examples of custom aggregate reports. (NOTE: all example URLs begin with:
http://<DDF_HOST>:<DDF_PORT>
which is omitted in the table for brevity.)
| Description | URL |
|---|---|
XLS aggregate report for March 15, 2013 to April 15, 2013 |
/services/internal/metrics/report.xls?startDate=2013-03-15T12:00:00-07:00&endDate=2013-04-15T12:00:00-07:00 |
XLS aggregate report for last 8 hours |
/services/internal/metrics/report.xls?dateOffset=28800 |
PPT aggregate report for March 15, 2013 to April 15, 2013 |
/services/internal/metrics/report.ppt?startDate=2013-03-15T12:00:00-07:00&endDate=2013-04-15T12:00:00-07:00 |
PPT aggregate report for last 8 hours |
/services/internal/metrics/report.ppt?dateOffset=28800 |
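The aggregate report URLs differ only in the format extension and the time-range parameters, so both formats can be covered by a small shell sketch (host and port are placeholders; the download commands are printed rather than executed):

```shell
# Print (not execute) a download command for each supported aggregate
# report format over the last 8 hours (28800 seconds).
for fmt in xls ppt; do
  echo "curl -o metrics-report.$fmt \"http://<DDF_HOST>:<DDF_PORT>/services/internal/metrics/report.$fmt?dateOffset=28800\""
done
```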
Add Custom Metrics to the Metrics Tab
It is possible to add custom (or existing, but non-collected) metrics to the Metrics tab by writing an application. Refer to the SDK example source code for Sample Metrics located in the DDF source code at sdk/sample-metrics and sdk/sdk-app.
|
The Metrics framework is not an open API, but rather a closed, internal framework that can change at any time in future releases. Be aware that any custom code written may not work with future releases. |
Install and Uninstall
The Metrics Reporting application can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuration
No configuration can be made for the Metrics Reporting application. All of the metrics that it collects data on are pre-configured in DDF out of the box.
The metrics-reporting feature can only be installed and uninstalled. It is installed by default.
Known Issues
The Metrics Collecting application uses a “round robin” database that does not store individual values but instead stores the rate of change between values at different times. Due to this method of storage, along with the fact that some processes can cross time frames, small discrepancies (differences of one or two in a value) may appear in the values reported for different time frames. These are especially apparent in reports covering shorter time frames, such as 15 minutes or one hour. They result from the averaging of data over time periods and should not impact the values over longer periods of time.
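The discrepancy described above can be illustrated with a small, self-contained sketch. This is a generic model of rate-averaged ("round robin") storage, not the actual database code DDF uses, and the timestamps are made up:

```python
# Generic sketch: a round-robin-style store keeps the average rate per
# fixed step, not individual values, so counts reconstructed for a time
# window that cuts across a step can differ slightly from the truth.

def stored_rates(events, step):
    """Average events-per-second for each step-sized bucket."""
    buckets = {}
    for t in events:
        buckets[t // step] = buckets.get(t // step, 0) + 1
    return {b: n / step for b, n in buckets.items()}

def estimated_count(rates, step, start, end):
    """Reconstruct a count for [start, end) from the stored rates."""
    total = 0.0
    for b, rate in rates.items():
        lo, hi = b * step, (b + 1) * step
        overlap = max(0, min(end, hi) - max(start, lo))
        total += rate * overlap
    return total

events = [5, 12, 58, 61, 90, 119]      # event timestamps in seconds
rates = stored_rates(events, step=60)  # one-minute storage step

# The window [30, 110) really contains 3 events (58, 61, 90), but the
# rate-based reconstruction comes out one higher.
print(round(estimated_count(rates, 60, 30, 110), 6))
```

The estimate is one higher than the true count because each step's events are smeared evenly across that step; the relative error shrinks as the report window grows, which is why longer-range reports are unaffected.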
Security Core API
The Security Core API contains all of the DDF Security Framework APIs that are used to perform security operations within DDF. More information on the APIs can be found on the Managing Web Service Security page.
Configuration
None
Install and Uninstall
The Security Core App installs this bundle by default. Do not uninstall the Security Core API as it is integral to system function and is depended on by all of the other security services.
Implementation Details
Imported Services
None
Exported Services
None
Compression Services
The compression services offer CXF-based message encoding that allows for compression of outgoing and incoming messages.
Configuration
None
Install and Uninstall
The compression services are not installed by default within the platform application. They can be installed from the console with:
features:install compression-[DESIRED COMPRESSION SERVICE]
Where [DESIRED COMPRESSION SERVICE] is one of the following:
| Compression Type | Description |
|---|---|
| exi | Adds Efficient XML Interchange (EXI) support to outgoing responses. EXI is a W3C standard for XML encoding that shrinks XML to a smaller size than normal GZip compression. More information is available at http://www.w3.org/XML/EXI/ |
| gzip | Adds GZip compression to incoming and outgoing messages through CXF components. Code comes with CXF. |
|
Due to the way CXF features work, the compression services either need to be installed BEFORE the desired CXF service is started or the CXF service needs to be refreshed / restarted after the compression service is installed. |
Implementation Details
Imported Services
None
Exported Services
| Registered Interface | Implemented Class(es) | Service Property | Value |
|---|---|---|---|
| org.apache.cxf.feature.Feature | ddf.compression.exi.EXIFeature, org.apache.cxf.transport.common.gzip.GZIPFeature | N/A | N/A |
Overview
The Security application provides authentication, authorization, and auditing services for DDF. It comprises both a framework that developers and integrators can extend and a reference implementation that meets security requirements. More information about the security framework and how everything works as a single security solution can be found on the Managing Web Service Security page.
This guide supports integration of this application with external frameworks.
Security CAS
The Security CAS app contains all of the services and implementations needed to integrate with the Central Authentication Server (CAS).
Information on setting up and configuring the CAS server is located on the CAS SSO Configuration page.
Components
| Bundle Name | Feature Located In | Description/Link to Bundle Page |
|---|---|---|
| security-cas-client | security-cas-client | Security CAS Client |
| security-cas-impl | security-cas-client | Security CAS Implementation |
| security-cas-tokenvalidator | security-cas-tokenvalidator | Security CAS Token Validator |
| security-cas-cxfservletfilter | security-cas-cxfservletfilter | Security CAS CXF Servlet Filter |
| security-cas-server | | Security CAS Server |
Security CAS Client
The Security CAS client bundle contains client files needed by components that are performing authentication with CAS. This includes setting up the CAS SSO servlet filters and starting a callback service that is needed to request proxy tickets from CAS.
Installation
This bundle is not installed by default and can be added by installing the security-cas-client feature.
Configuration
| Configuration Name | Default Value | Additional Description |
|---|---|---|
| Server Name | | This is the name of the server that is calling CAS. The URL is used during CAS redirection to redirect back to the calling server. |
| CAS Server URL | | The main URL to the CAS Web application. |
| CAS Server Login URL | | URL to the login page of CAS (generally ends in /login). |
| Proxy Callback URL | | Full URL of the callback service that CAS hits to create proxy tickets. |
| Proxy Receptor URL | /sso | |
Implementation Details
Imported Services
None
Exported Services
| Registered Interface | Implementation Class | Properties Set |
|---|---|---|
| javax.servlet.Filter | ddf.security.cas.client.ProxyFilter | CAS Filters |
Security CAS Implementation
The Security CAS implementation bundle contains CAS-specific implementations of classes from the Security Core API. Inside this bundle is the ddf.security.service.impl.cas.CasAuthenticationToken class. It is an implementation of the AuthenticationToken class that is used to pass Authentication Credentials to the Security Framework.
Configuration
None.
Implementation Details
Imported Services
None
Exported Services
None
Security CAS Server
The Security CAS Server project creates a web application (.war) file that is configured to be deployed to a Tomcat application server. Information on installing and configuring it within Tomcat is available on the CAS SSO Configuration page.
Configuration
N/A - Not a bundle
Implementation Details
N/A - Not a bundle
Security CAS Token Validator
The Security CAS TokenValidator bundle exports a TokenValidator service that is called by the STS to validate CAS proxy tickets.
Installation
This bundle is not installed by default and can be added by installing the security-cas-tokenvalidator feature.
Configuration
Settings
| Configuration Name | Default Value | Additional Description |
|---|---|---|
| CAS Server URL | | The hostname in the URL should match the hostname alias defined within the certificate that CAS is using for SSL communication. |
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
| ddf.security.encryption.EncryptionService | required | false |
Exported Services
| Registered Interfaces | Implementation Class | Properties Set |
|---|---|---|
| ddf.catalog.util.DdfConfigurationWatcher, org.apache.cxf.sts.token.validator.TokenValidator | ddf.security.cas.WebSSOTokenValidator | CAS Server URL and Encryption Service reference |
Security CAS CXF Servlet Filter
The Security CAS CXF Servlet Filter bundle binds a list of CAS servlet filters to the CXF servlet. The servlet filters are defined by the security-cas-client bundle.
Installation
This bundle is not installed by default and can be added by installing the security-cas-cxfservletfilter feature.
Configuration
Settings
| Configuration Name | Default Value | Additional Description |
|---|---|---|
| URL Pattern | /services/catalog/* | This defines the servlet URL pattern that will be bound to the CAS filter. By default, it binds to the REST and OpenSearch endpoints. The REST endpoint is called by the SearchUI when accessing individual metadata about a metacard and when accessing the metacard’s thumbnail. An example of just securing the OpenSearch endpoint would be the value: |
|
Endpoints that are secured by the CXF Servlet Filters will not currently work with federation. With the default settings, REST and OpenSearch federation to the site with this feature installed will not work. Federation from this site, however, will work normally. |
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
| javax.servlet.Filter | required | false |
Exported Services
None (filter is exported inside the code and not via configuration)
Security Core
The Security Core app contains all of the necessary components that are used to perform security operations (authentication, authorization, and auditing) required in the framework.
Components
| Bundle Name | Located in Feature | Description / Link to Bundle Page |
|---|---|---|
| security-core-api | security-core | Security Core API |
| security-core-impl | security-core | Security Core Implementation |
| security-core-commons | security-core | Security Core Commons |
Security Core Commons
The Security Core Commons bundle contains helper and utility classes that are used within DDF to help with performing common security operations. Most notably, this bundle contains the ddf.security.common.audit.SecurityLogger class that performs the security audit logging within DDF.
Configuration
None
Implementation Details
Imported Services
None
Exported Services
None
Security Core Implementation
The Security Core Implementation contains the reference implementations for the Security Core API interfaces that come with the DDF distribution.
Configuration
None
Install and Uninstall
The Security Core app installs this bundle by default. It is recommended to use this bundle as it contains the reference implementations for many classes used within the DDF Security Framework.
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
| org.apache.shiro.realm.Realm | optional | true |
Exported Services
| Registered Interface | Implementation Class | Properties Set |
|---|---|---|
| ddf.security.service.SecurityManager | ddf.security.service.impl.SecurityManagerImpl | None |
Security Encryption
The DDF Security Encryption application offers an encryption framework and service implementation for other applications to use. This service is commonly used to encrypt and decrypt default passwords that are located within the metatype and Administration Web Console.
Components
| Bundle Name | Feature Located In | Description/Link to Bundle Page |
|---|---|---|
| security-encryption-api | security-encryption | Security Encryption API |
| security-encryption-impl | security-encryption | Security Encryption Implementation |
| security-encryption-commands | security-encryption | Security Encryption Commands |
Security Encryption API
The Security Encryption API bundle provides the framework for the encryption service. Applications that use the encryption service should import this bundle and use the interfaces defined within it instead of calling an implementation directly.
Installation
This bundle is installed by default as part of the security-encryption feature. Many applications that come with DDF depend on this bundle and it should not be uninstalled.
Configuration
Settings
None
Implementation Details
Imported Services
None
Exported Services
None
Security Encryption Commands
The Security Encryption Commands bundle enhances the DDF system console by allowing administrators and integrators to encrypt and decrypt values directly from the console. More information and sample commands are available on the Encryption Service page.
Installation
This bundle is installed by default by the security-encryption feature. This bundle is tied specifically to the DDF console and can be uninstalled without causing any issues to other applications. When uninstalled, administrators will not be able to encrypt and decrypt data from the console.
Configuration
Settings
None
Implementation Details
Imported Services
None
Exported Services
None
Security Encryption Implementation
The Security Encryption Implementation bundle contains all of the service implementations for the Encryption Framework and exports those implementations as services to the OSGi service registry.
Installation
This bundle is installed by default as part of the security-encryption feature. Other projects are dependent on the services this bundle exports and it should not be uninstalled unless another security service implementation is being added.
Configuration
Settings
None
Implementation Details
Imported Services
None
Exported Services
| Registered Interface | Implementation Class | Properties Set |
|---|---|---|
| ddf.security.encryption.EncryptionService | ddf.security.encryption.impl.EncryptionServiceImpl | Key |
Security LDAP
The DDF LDAP application allows the user to configure either an embedded or a standalone LDAP server. The provided features contain a default set of schemas and users loaded to help facilitate authentication and authorization testing.
Components
| Bundle Name | Feature Located In | Description/Link to Bundle Page |
|---|---|---|
| ldap-embedded | ldap | Embedded LDAP Configuration |
Configuring a Standalone LDAP Server
In some production environments it is suggested that the LDAP server be run separately from the DDF installation. Because the embedded LDAP application has minimal dependencies, it can be run on a minimal install of DDF that uses much less memory and CPU than a standard installation.
Run a Standalone Embedded LDAP Instance
- Obtain and unzip the DDF kernel (ddf-distribution-kernel-<VERSION>.zip).
- Start the distribution.
- When the kernel is loaded up with the DDF logo at the command prompt, execute:
la
which is short for "list all."
|
Since the kernel does not include all apps, if you were to do a "list" instead of "la," no results would be returned at this point. |
- Verify that all bundles are Active.
- Deploy the Embedded LDAP app by copying the ldap-embedded-app-<VERSION>.kar file into the <DISTRIBUTION_HOME>/deploy directory. You can verify that the LDAP server is installed by checking the DDF log or by performing an la and verifying that the OpenDJ bundle is in the Active state. Additionally, it should be responding to LDAP requests on the default ports, 1389 and 1636.
- To perform any of the configurations identified below, the web console will need to be installed by executing:
features:install webconsole
Configuration
The configuration options are located on the standard DDF configuration web console under the title LDAP Server. It currently contains three configuration options.
| Configuration Name | Description |
|---|---|
| LDAP Port | Sets the port for LDAP (plaintext and StartTLS). 0 will disable the port. |
| LDAPS Port | Sets the port for LDAPS. 0 will disable the port. |
| Base LDIF File | Location on the server for an LDIF file. This file will be loaded into the LDAP and overwrite any existing entries. This option should be used when updating the default groups/users with a new LDIF file for testing. The LDIF file being loaded may contain any LDAP entries (schemas, users, groups, etc.). If the location is left blank, the default base LDIF file that comes with DDF will be used. |
Trust Certificates
For LDAPS to function correctly, it is important that the LDAP server is configured with a keystore file that trusts the clients it is connecting to and vice versa. Complete the following procedure to provide your own keystore information for the LDAP.
- Navigate to the /etc/keystores folder in the kernel distribution folder.
- Find the serverKeystore.jks file and replace it with a keystore file valid for your operating environment.
- If the DDF kernel is running, restart it so the changes will take place.
Connect to a Standalone LDAP Server
DDF instances can connect to an external LDAP server by installing and configuring the security-sts-server feature detailed here.
Embedded LDAP Configuration
The Embedded LDAP application contains an LDAP server (OpenDJ version 2.4.6) that has a default set of schemas and users loaded to help facilitate authentication and authorization testing.
Default Settings
Ports
| Protocol | Default Port |
|---|---|
| LDAP | 1389 |
| LDAPS | 1636 |
| StartTLS | 1389 |
Users
LDAP Users
| Username | Password | Groups | Description |
|---|---|---|---|
| testuser1 | password1 | | General test user for authentication |
| testuser2 | password2 | | General test user for authentication |
| nromanova | password1 | avengers | General test user for authentication |
| lcage | password1 | admin, avengers | General test user for authentication, Admin user for karaf |
| jhowlett | password1 | admin, avengers | General test user for authentication, Admin user for karaf |
| pparker | password1 | admin, avengers | General test user for authentication, Admin user for karaf |
| jdrew | password1 | admin, avengers | General test user for authentication, Admin user for karaf |
| tstark | password1 | admin, avengers | General test user for authentication, Admin user for karaf |
| bbanner | password1 | admin, avengers | General test user for authentication, Admin user for karaf |
| srogers | password1 | admin, avengers | General test user for authentication, Admin user for karaf |
| admin | admin | admin | Admin user for karaf |
LDAP Admin
| Username | Password | Groups | Attributes | Description |
|---|---|---|---|---|
| admin | secret | | | Administrative User for LDAP |
Schemas
The default schemas loaded into the LDAP instance are the same defaults that come with OpenDJ.
| Schema File Name | Schema Description (http://opendj.forgerock.org/doc/admin-guide/index/chap-schema.html) |
|---|---|
| 00-core.ldif | This file contains a core set of attribute type and objectclass definitions from several standard LDAP documents, including draft-ietf-boreham-numsubordinates, draft-findlay-ldap-groupofentries, draft-furuseth-ldap-untypedobject, draft-good-ldap-changelog, draft-ietf-ldup-subentry, draft-wahl-ldap-adminaddr, RFC 1274, RFC 2079, RFC 2256, RFC 2798, RFC 3045, RFC 3296, RFC 3671, RFC 3672, RFC 4512, RFC 4519, RFC 4523, RFC 4524, RFC 4530, RFC 5020, and X.501. |
| 01-pwpolicy.ldif | This file contains schema definitions from draft-behera-ldap-password-policy, which defines a mechanism for storing password policy information in an LDAP directory server. |
| 02-config.ldif | This file contains the attribute type and objectclass definitions for use with the directory server configuration. |
| 03-changelog.ldif | This file contains schema definitions from draft-good-ldap-changelog, which defines a mechanism for storing information about changes to directory server data. |
| 03-rfc2713.ldif | This file contains schema definitions from RFC 2713, which defines a mechanism for storing serialized Java objects in the directory server. |
| 03-rfc2714.ldif | This file contains schema definitions from RFC 2714, which defines a mechanism for storing CORBA objects in the directory server. |
| 03-rfc2739.ldif | This file contains schema definitions from RFC 2739, which defines a mechanism for storing calendar and vCard objects in the directory server. Note that the definition in RFC 2739 contains a number of errors, and this schema file has been altered from the standard definition in order to fix a number of those problems. |
| 03-rfc2926.ldif | This file contains schema definitions from RFC 2926, which defines a mechanism for mapping between Service Location Protocol (SLP) advertisements and LDAP. |
| 03-rfc3112.ldif | This file contains schema definitions from RFC 3112, which defines the authentication password schema. |
| 03-rfc3712.ldif | This file contains schema definitions from RFC 3712, which defines a mechanism for storing printer information in the directory server. |
| 03-uddiv3.ldif | This file contains schema definitions from RFC 4403, which defines a mechanism for storing UDDIv3 information in the directory server. |
| 04-rfc2307bis.ldif | This file contains schema definitions from the draft-howard-rfc2307bis specification, used to store naming service information in the directory server. |
| 05-rfc4876.ldif | This file contains schema definitions from RFC 4876, which defines a schema for storing Directory User Agent (DUA) profiles and preferences in the directory server. |
| 05-samba.ldif | This file contains schema definitions required when storing Samba user accounts in the directory server. |
| 05-solaris.ldif | This file contains schema definitions required for Solaris and OpenSolaris LDAP naming services. |
| 06-compat.ldif | This file contains the attribute type and objectclass definitions for use with the directory server configuration. |
Configuration
Start and Stop
The embedded LDAP application installs a feature with the name ldap-embedded. Installing and uninstalling this feature will start and stop the embedded LDAP server. This will also install a fresh instance of the server each time. If changes need to persist, stop then start the embedded-ldap-opendj bundle (rather than installing/uninstalling the feature).
All settings, configurations, and changes made to the embedded LDAP instances are persisted across DDF restarts. If DDF is stopped while the LDAP feature is installed and started, it will automatically restart with the saved settings on the next DDF start.
Settings
The configuration options are located on the standard DDF configuration web console under the title LDAP Server. It currently contains three configuration options.
| Configuration Name | Description |
|---|---|
| LDAP Port | Sets the port for LDAP (plaintext and StartTLS). 0 will disable the port. |
| LDAPS Port | Sets the port for LDAPS. 0 will disable the port. |
| Base LDIF File | Location on the server for an LDIF file. This file will be loaded into the LDAP and overwrite any existing entries. This option should be used when updating the default groups/users with a new LDIF file for testing. The LDIF file being loaded may contain any LDAP entries (schemas, users, groups, etc.). If the location is left blank, the default base LDIF file that comes with DDF will be used. |
Limitations
Current limitations for the embedded LDAP instances include:
- Inability to store the LDAP files/storage outside of the DDF installation directory. This results in any LDAP data (i.e., LDAP user information) being lost when the ldap-embedded feature is uninstalled.
- Cannot be run standalone from DDF. In order to run embedded-ldap, the DDF must be started.
External Links
Location to the default base LDIF file in the DDF source code: https://github.com/codice/ddf/blob/master/ldap/embedded/ldap-embedded-opendj/src/main/resources/default-users.ldif
OpenDJ documentation: http://opendj.forgerock.org/docs.html
LDAP Administration
OpenDJ provides a number of tools for LDAP administration. Refer to the OpenDJ Admin Guide (http://opendj.forgerock.org/opendj-server/doc/admin-guide/).
Download the Admin Tools
OpenDJ (Version 2.4.6) and the included tool suite can be downloaded at http://www.forgerock.org/opendj-archive.html.
Use the Admin Tools
The admin tools are located in <opendj-installation>/bat for Windows and <opendj-installation>/bin for *nix. These tools can be used to administer both local and remote LDAP servers by setting the host and port parameters appropriately.
Example Commands for Disabling/Enabling a User’s Account
In this example, the user Bruce Banner (uid=bbanner) is disabled using the manage-account command on Windows. Run manage-account --help for usage instructions.
D:\OpenDJ-2.4.6\bat>manage-account set-account-is-disabled -h localhost -p 4444 -O true
-D "cn=admin" -w secret -b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
Subject DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Issuer DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Validity: Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Account Is Disabled: true
Verify the Account is Disabled
Notice Account Is Disabled: true in the listing.
D:\OpenDJ-2.4.6\bat>manage-account get-all -h localhost -p 4444 -D "cn=admin" -w secret
-b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
Subject DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Issuer DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Validity: Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Password Policy DN: cn=Default Password Policy,cn=Password Policies,cn=config
Account Is Disabled: true
Account Expiration Time:
Seconds Until Account Expiration:
Password Changed Time: 19700101000000.000Z
Password Expiration Warned Time:
Seconds Until Password Expiration:
Seconds Until Password Expiration Warning:
Authentication Failure Times:
Seconds Until Authentication Failure Unlock:
Remaining Authentication Failure Count:
Last Login Time:
Seconds Until Idle Account Lockout:
Password Is Reset: false
Seconds Until Password Reset Lockout:
Grace Login Use Times:
Remaining Grace Login Count: 0
Password Changed by Required Time:
Seconds Until Required Change Time:
Password History:
Enable the Account
D:\OpenDJ-2.4.6\bat>manage-account clear-account-is-disabled -h localhost -p 4444 -D
"cn=admin" -w secret -b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
Subject DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Issuer DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Validity: Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Account Is Disabled: false
Verify the Account is Enabled
Notice Account Is Disabled: false in the listing.
D:\OpenDJ-2.4.6\bat>manage-account get-all -h localhost -p 4444 -D "cn=admin" -w secret
-b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
Subject DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Issuer DN: CN=Win7-1, O=Administration Connector Self-Signed Certificate
Validity: Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Password Policy DN: cn=Default Password Policy,cn=Password Policies,cn=config
Account Is Disabled: false
Account Expiration Time:
Seconds Until Account Expiration:
Password Changed Time: 19700101000000.000Z
Password Expiration Warned Time:
Seconds Until Password Expiration:
Seconds Until Password Expiration Warning:
Authentication Failure Times:
Seconds Until Authentication Failure Unlock:
Remaining Authentication Failure Count:
Last Login Time:
Seconds Until Idle Account Lockout:
Password Is Reset: false
Seconds Until Password Reset Lockout:
Grace Login Use Times:
Remaining Grace Login Count: 0
Password Changed by Required Time:
Seconds Until Required Change Time:
Password History:
Security PEP
The DDF Security PEP application contains bundles and services that enable service and metacard authorization. These two types of authorization can be installed separately and extended with custom services.
Components
| Bundle Name | Located in Feature | Description/Link to Bundle Page |
|---|---|---|
| security-pep-interceptor | security-pep-serviceauthz | Security PEP Interceptor |
| security-pep-redaction | security-pep-redaction | Security PEP Redaction |
Security PEP Interceptor
The Security PEP Interceptor bundle contains the ddf.security.pep.interceptor.PEPAuthorizingInterceptor class. This class uses CXF to intercept incoming SOAP messages and enforces service authorization policies by sending the service request to the security framework.
Installation
This bundle is not installed by default and can be added by installing the security-pep-serviceauthz feature.
|
To perform service authorization within a default install of DDF, this bundle MUST be installed. |
Configuration
Settings
None
Implementation Details
Imported Services
None
Exported Services
None
Security PEP Redaction
The Security PEP Redaction bundle contains a redaction plugin that is added as a post-query plugin in the DDF query lifecycle. This plugin looks at the security attributes on the metacard and compares them to the security attributes of the user who made the query request. If they do not match, the plugin will, depending on the configuration, filter the metacard out of the results or redact certain parts of the metacard that the user does not have permission to see.
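The filter-versus-redact decision can be sketched generically. The following is an illustration of the concept only, not the actual RedactionPlugin implementation, and the attribute names are made up:

```python
def filter_results(results, user_attrs, redact=False):
    """Drop or redact results whose security attributes the user lacks.

    Each result is a dict with a "security" set of required markings and
    a "fields" dict of visible data. A user may see a result unredacted
    only if the user's attributes cover every required marking.
    """
    visible = []
    for result in results:
        missing = result["security"] - user_attrs
        if not missing:
            visible.append(result)
        elif redact:
            # Keep the record but blank out its protected fields.
            redacted = dict(result,
                            fields={k: "<redacted>" for k in result["fields"]})
            visible.append(redacted)
    return visible

results = [
    {"security": {"groupA"}, "fields": {"title": "open record"}},
    {"security": {"groupB"}, "fields": {"title": "restricted record"}},
]
# With redact=False, only the "open record" entry survives.
print(filter_results(results, user_attrs={"groupA"}))
```

With redact=True the second record would instead be returned with its fields blanked out, which mirrors the two behaviors the configuration chooses between.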
Installation
This bundle is not installed by default and can be added by installing the security-pep-redaction feature.
Configuration
None
Implementation Details
Imported Services
None
Exported Services
| Registered Interface | Implementation Class | Properties Set |
|---|---|---|
| ddf.catalog.plugin.PostQueryPlugin | ddf.security.pep.redaction.plugin.RedactionPlugin | None |
Security STS
The Security STS application contains the bundles and services necessary to run and talk to a Security Token Service (STS). It builds off of the Apache CXF STS code and adds components specific to DDF functionality.
Components
| Bundle Name | Located in Feature | Description/Link to Bundle Page |
|---|---|---|
| security-sts-clientconfig | security-sts-realm | Security STS Client Config |
| security-sts-realm | security-sts-realm | Security STS Realm |
| security-sts-ldaplogin | security-sts-ldaplogin | Security STS LDAP Login |
| security-sts-ldapclaimshandler | security-sts-server | Security STS LDAP Claims Handler |
| security-sts-server | security-sts-server | Security STS Server |
| security-sts-samlvalidator | security-sts-server | Contains the default CXF SAML validator, exposes it as a service for the STS. |
| security-sts-x509validator | security-sts-server | Contains the default CXF x509 validator, exposes it as a service for the STS. |
Security STS Client Config
The DDF Security STS Client Config bundle keeps track of and exposes configurations and settings for the CXF STS client. This client can be used by other services to create their own STS client. Once a service is registered as a watcher of the configuration, it will be updated whenever the settings for the STS client change.
Installation
This bundle is not installed by default and can be added by installing the security-sts-realm feature.
Configuration
Settings
Settings can be found in the web console under Configuration → Security STS Client.
| Configuration Name | Default Value | Additional Information |
|---|---|---|
| STS Address | | The hostname of the remote server should match the certificate that the server is using. |
| STS Endpoint Name | {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}STS_Port | |
| STS Service Name | {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService | |
| Signature Properties | etc/ws-security/client/signature.properties | |
| Encryption Properties | etc/ws-security/client/encryption.properties | |
| STS Properties | etc/ws-security/client/signature.properties | |
| Claims | <List of Claims> | |
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
| ddf.catalog.DdfConfigurationWatcher | required | true |
| org.osgi.service.cm.ConfigurationAdmin | required | false |
Exported Services
None
External/WS-S STS Support
Security STS WSS
This configuration works just like the STS Client Config for the internal STS, but produces standard requests instead of the custom DDF ones. It supports two new auth types for the context policy manager, WSSBASIC and WSSPKI.
Security STS Address Provider
This allows one to select which STS address will be used (e.g. in SOAP sources) for clients of this service. Default is off (internal).
Security STS LDAP Claims Handler
The DDF Security STS LDAP Claims Handler bundle adds functionality to the STS server that allows it to retrieve claims from an LDAP server. It also adds mappings for the LDAP attributes to the STS SAML claims.
Installation
This bundle is not installed by default and can be added by installing the
security-sts-server
feature.
Configuration
Settings
Settings can be found in the web console under Configuration → Security STS LDAP and Roles Claims Handler.
| Configuration Name | Default Value | Additional Information |
|---|---|---|
LDAP URL |
ldap://localhost:1389 |
|
LDAP Bind User DN |
cn=admin |
|
LDAP Bind User Password |
secret |
This password value is encrypted by default using the Security Encryption application. |
LDAP Username Attribute |
uid |
|
LDAP Base User DN |
ou=users,dc=example,dc=com |
|
LDAP Group ObjectClass |
groupOfNames |
ObjectClass that defines structure for group membership in LDAP. Usually this is groupOfNames or groupOfUniqueNames |
LDAP Membership Attribute |
member |
Attribute used to designate the user’s name as a member of the group in LDAP. Usually this is member or uniqueMember |
LDAP Base Group DN |
ou=groups,dc=example,dc=com |
|
User Attribute Map File |
etc/ws-security/attributeMap.properties |
Properties file that contains mappings from Claim=LDAP attribute. |
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
ddf.security.encryption.EncryptionService |
optional |
false |
Exported Services
| Registered Interface | Implementation Class | Properties Set |
|---|---|---|
org.apache.cxf.sts.claims.ClaimsHandler |
ddf.security.sts.claimsHandler.LdapClaimsHandler |
Properties from the settings |
org.apache.cxf.sts.claims.ClaimsHandler |
ddf.security.sts.claimsHandler.RoleClaimsHandler |
Properties from the settings |
Security STS LDAP Login
The DDF Security STS LDAP Login bundle enables functionality within the STS that allows it to use an LDAP to perform authentication when passed a UsernameToken in a RequestSecurityToken SOAP request.
Installation
This bundle is not installed by default and can be added by installing the security-sts-ldaplogin feature.
Configuration
Settings
Configuration settings can be found in the web console under Configuration → Security STS LDAP Login.
| Configuration Name | Default Value | Additional Information |
|---|---|---|
LDAP URL |
ldaps://localhost:1636 |
|
LDAP Bind User DN |
cn=admin |
|
LDAP Bind User Password |
secret |
This password value is encrypted by default using the Security Encryption application. |
LDAP Username Attribute |
uid |
|
LDAP Base User DN |
ou=users,dc=example,dc=com |
|
LDAP Base Group DN |
ou=groups,dc=example,dc=com |
|
SSL Keystore Alias |
server |
This alias is used when connecting to the LDAP using SSL (LDAPS). |
Implementation Details
Imported Services
None
Exported Services
None
Security STS Realm
The DDF Security STS Realm performs authentication of a user by delegating the authentication request to an STS. This differs from the realms located within the Security PDP application, which perform only authorization, not authentication.
Installation
This bundle is installed by default and should not be uninstalled unless the security framework is not being used.
Configuration
Settings
None
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
ddf.security.encryption.EncryptionService |
optional |
false |
Exported Services
| Registered Interfaces | Implementation Class | Properties Set |
|---|---|---|
ddf.catalog.util.DdfConfigurationWatcher org.apache.shiro.realm.Realm |
ddf.security.realm.sts.StsRealm |
None |
Security STS Server
The DDF Security STS Server is a bundle that starts up an implementation of the CXF STS. The STS obtains many of its configurations (Claims Handlers, Token Validators, etc.) from the OSGi service registry as those items are registered as services using the CXF interfaces. The various services that the STS Server imports are listed in the Implementation Details section of this page.
|
The WSDL for the STS is located at the |
Installation
This bundle is not installed by default and can be added by installing the security-sts-server feature.
Configuration
Settings
Configuration settings can be found in the web console under Configuration → Security STS Server.
| Configuration Name | Default Value | Additional Information |
|---|---|---|
SAML Assertion Lifetime |
1800 |
|
Token Issuer |
localhost |
The name of the server issuing tokens. Generally this is the cn or hostname of this machine on the network. |
Signature Username |
localhost |
Alias of the private key in the STS Server’s keystore used to sign messages. |
Encryption Username |
localhost |
Alias of the private key in the STS Server’s keystore used to encrypt messages. |
Implementation Details
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
org.apache.cxf.sts.claims.ClaimsHandler |
optional |
true |
org.apache.cxf.sts.token.validator.TokenValidator |
optional |
true |
Exported Services
None
Security PDP
The DDF Security PDP application contains services that are able to perform authorization decisions based on configurations and policies. In the DDF Security Framework, these components are called realms, and they implement the org.apache.shiro.realm.Realm and org.apache.shiro.authz.Authorizer interfaces. Although these components perform decisions on access control, enforcement of this decision is performed by components within the Security PEP application.
Components
| Bundle Name | Located in Feature | Description/Link to Bundle Page |
|---|---|---|
security-pdp-xacmlrealm |
security-pdp-xacml |
Security PDP XACML Realm |
security-pdp-authzrealm |
security-pdp-simple |
Security PDP AuthZ Realm |
Security PDP AuthZ Realm
The DDF Security PDP AuthZ Realm exposes a realm service that makes decisions on authorization requests using the attributes stored within the metacard to determine if access should be granted. Unlike the Security PDP XACML Realm, this realm does not use XACML and does not delegate decisions to an external processing engine. Decisions are made based on "match-all" and "match-one" logic. The configuration below provides the mapping between user attributes and metacard attributes - one map exists for each type of mapping (each map may contain multiple values).
-
Match-All Mapping: This mapping is used to guarantee that all values present in the specified metacard attribute exist in the corresponding user attribute.
-
Match-One Mapping: This mapping is used to guarantee that at least one of the values present in the specified metacard attribute exists in the corresponding user attribute.
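The two mapping checks can be sketched with simple set logic. This is a hypothetical illustration of the rules described above, not the SimpleAuthzRealm source:

```python
def match_all(user_values, metacard_values):
    # Match-All: every value on the metacard attribute must also be
    # present in the corresponding user attribute.
    return set(metacard_values).issubset(set(user_values))

def match_one(user_values, metacard_values):
    # Match-One: at least one metacard value must be present in the
    # corresponding user attribute.
    return bool(set(metacard_values) & set(user_values))

# Hypothetical attribute values: a user cleared for USA and CAN,
# a metacard releasable to USA and GBR.
user_attr = ["USA", "CAN"]
card_attr = ["USA", "GBR"]

print(match_all(user_attr, card_attr))  # False: GBR is not in the user attribute
print(match_one(user_attr, card_attr))  # True: USA is shared
```

A metacard attribute that is a subset of the user attribute passes both checks; a disjoint attribute passes neither.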
Installation
This bundle is not installed by default and can be added by installing the security-pdp-java feature.
Configuration
Settings
Settings can be found in the web console under Configuration → Security Simple AuthZ Realm.
| Configuration Name | Default Value | Additional Description |
|---|---|---|
Roles |
admin |
Add all the roles that allow access to restricted actions. Any user that has any one of these roles will be allowed access to restricted actions. |
Open Action List |
|
Add any actions that will not be restricted by role. Any action listed here will automatically be allowed to be performed by any user in any role. |
Match-All Mappings |
|
These map user attributes to metacard security attributes to be used in "Match All" checking. All the values in the metacard attribute must be present in the user attributes in order to "pass" and allow access. These attribute names are case-sensitive. |
Match-One Mappings |
|
These map user attributes to metacard security attributes to be used in "Match One" checking. At least one of the values from the metacard attribute must be present in the corresponding user attribute to "pass" and allow access. These attribute names are case-sensitive. |
Implementation Details
Imported Services
None
Exported Services
| Registered Interfaces | Implementation Class | Properties Set |
|---|---|---|
org.apache.shiro.realm.Realm |
ddf.security.pdp.realm.SimpleAuthzRealm |
None |
Security PDP XACML Realm
The DDF Security PDP XACML realm exposes a realm that creates a XACML request from the incoming authorization information and sends the request to a XACML processing engine. The engine that handles the request is not hardcoded; it is retrieved at runtime from the OSGi service registry. This realm contains an embedded XACML processing engine that handles the requests and policies.
Installation
This bundle is not installed by default and can be added by installing the security-pdp-xacml feature.
Configuration
Settings
None
Implementation Details
Imported Services
None
Exported Services
| Registered Interfaces | Implementation Class | Properties Set |
|---|---|---|
org.apache.shiro.realm.Realm |
ddf.security.pep.realm.XACMLRealm |
None |
Anonymous Interceptor
The goal of the AnonymousInterceptor is to allow non-secure clients (SOAP requests without security headers) to access secure service endpoints.
All requests to secure endpoints must include, as part of the incoming message, a user’s credentials in the form of a SAML assertion or a reference to a SAML assertion. For REST/HTTP requests, either the assertion itself or the session reference (that contains the assertion) is included. For SOAP requests, the assertion is included in the SOAP header.
Rather than reject requests without user credentials, the anonymous interceptor detects the missing credentials and inserts an assertion that represents the "anonymous" user. The attributes included in this anonymous user assertion are configured by the administrator to represent any unknown user on the current network.
Installing
The AnonymousInterceptor is installed by default with the DDF Security Application.
Configuring
Configuring via the Admin Console
-
Navigate to the DDF Admin Console → Configuration
-
Select “Security STS Anonymous Claims Handler”
-
Click the + next to Attributes to add a new attribute
-
Add the following attribute:
“http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier=anonymous” -
Repeat the steps above to add any additional attributes needed to represent the anonymous user -
Click save
-
Now select “Security Simple AuthZ Realm”
-
Under Roles, add anonymous so that the value is: “admin,anonymous”
-
Click save.
Once these configurations have been added, the AnonymousInterceptor is ready for use. Both secure and non-secure requests will be accepted by all secure DDF service endpoints.
Security IdP
The Security IdP application provides service provider handling that satisfies the SAML 2.0 Web SSO profile in order to support external IdPs (Identity Providers).
Components
| Bundle Name | Located in Feature | Description |
|---|---|---|
security-idp-sp |
security-idp |
IdP Service Provider |
security-idp-server |
security-idp |
IdP Server |
Installation
These bundles are not installed by default; they can be added by installing the security-idp feature.
Security IdP Service Provider
The IdP client that interacts with the specified Identity Provider.
Configuration
-
Navigate to Admin Console → DDF Security → Configuration → IdP Client
-
Populate IdP Metadata field with an HTTPS URL (https://), file URL (file:), or XML block to refer to desired metadata (e.g., https://localhost:8993/services/idp/login/metadata)
Security IdP Server
An internal Identity Provider solution.
Configuration
-
Navigate to Admin Console → DDF Security → Configuration → IdP Server
-
Click the + next to SP Metadata to add a new entry
-
Populate the new entry with an HTTPS URL (https://), file URL (file:), or XML block to refer to desired metadata (e.g., https://localhost:8993/services/saml/sso/metadata)
Related Configuration
-
Navigate to Admin Console → DDF Security → Configuration → Web Context Policy Manager
-
Under Authentication Types, set the IDP authentication type as necessary. Note that it should only be used on context paths that will be accessed by users via web browsers. For example:
-
/search=SAML|IDP
-
Limitations
The internal Identity Provider solution should be used in preference to any external solutions until the IdP Service Provider fully satisfies the SAML 2.0 Web SSO profile.
Overview
The DDF Spatial Application provides a KML transformer and a KML network link endpoint that allows a user to generate a View-based KML Query Results Network Link.
This guide supports integration of this application with external frameworks.
Integrating DDF with CSW
Catalog Services for Web (CSW) is an Open Geospatial Consortium (OGC) standard.
CSW v2.0.2 Endpoint
The CSW endpoint provides an XML-RPC endpoint that a client accesses to search collections of descriptive information (metadata) about geospatial data and services. The CSW endpoint implements version 2.0.2 of the CSW specification (http://www.opengeospatial.org/standards/cat).
Using the CSW Endpoint
Once installed, the CSW endpoint is accessible from http://<DDF_HOST>:<DDF_PORT>/services/csw.
GetCapabilities Operation
The GetCapabilities operation describes the operations the catalog supports and the URLs used to access those operations.
The CSW endpoint supports both HTTP GET and HTTP POST requests for the GetCapabilities operation. The response to either request will always be a csw:Capabilities XML document. This XML document is defined by the CSW-Discovery XML Schema (http://schemas.opengis.net/csw/2.0.2/CSW-discovery.xsd).
GetCapabilities HTTP GET
The HTTP GET form of GetCapabilities uses query parameters via the following URL:
http://<DDF_HOST>:<DDF_PORT>/services/csw?service=CSW&version=2.0.2&request=GetCapabilities
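As a sketch, the GET URL above can be assembled programmatically. The host and port below are placeholders for a real DDF instance, and the helper function is hypothetical:

```python
from urllib.parse import urlencode

def get_capabilities_url(host, port):
    # Build the CSW GetCapabilities HTTP GET URL for a DDF instance.
    # The three query parameters are fixed by the CSW 2.0.2 endpoint.
    params = {"service": "CSW", "version": "2.0.2", "request": "GetCapabilities"}
    return f"http://{host}:{port}/services/csw?{urlencode(params)}"

# Placeholder host/port; substitute your <DDF_HOST> and <DDF_PORT>.
print(get_capabilities_url("localhost", 8181))
```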
GetCapabilities HTTP POST
The HTTP POST form of GetCapabilities operates on the root CSW endpoint URL (http://<DDF_HOST>:<DDF_PORT>/services/csw) with an XML message body that is defined by the GetCapabilities element of the CSW-Discovery XML Schema (http://schemas.opengis.net/csw/2.0.2/CSW-discovery.xsd).
<?xml version="1.0" ?>
<csw:GetCapabilities
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
service="CSW"
version="2.0.2" >
</csw:GetCapabilities>
GetCapabilities Response
The following is an example of an application/xml response to the GetCapabilities operation:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Capabilities
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ns4="http://www.w3.org/1999/xlink"
xmlns:ns5="http://www.w3.org/2001/SMIL20/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ows="http://www.opengis.net/ows"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance" version="2.0.2" ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-discovery.xsd">
<ows:ServiceIdentification>
<ows:Title>Catalog Service for the Web</ows:Title>
<ows:Abstract>DDF CSW Endpoint</ows:Abstract>
<ows:ServiceType>CSW</ows:ServiceType>
<ows:ServiceTypeVersion>2.0.2</ows:ServiceTypeVersion>
</ows:ServiceIdentification>
<ows:ServiceProvider>
<ows:ProviderName>DDF</ows:ProviderName>
<ows:ProviderSite/>
<ows:ServiceContact/>
</ows:ServiceProvider>
<ows:OperationsMetadata>
<ows:Operation name="GetCapabilities">
<ows:DCP>
<ows:HTTP>
<ows:Get ns4:href="https://localhost:8993/services/csw"/>
<ows:Post ns4:href="https://localhost:8993/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="sections">
<ows:Value>ServiceIdentification</ows:Value>
<ows:Value>ServiceProvider</ows:Value>
<ows:Value>OperationsMetadata</ows:Value>
<ows:Value>Filter_Capabilities</ows:Value>
</ows:Parameter>
</ows:Operation>
<ows:Operation name="DescribeRecord">
<ows:DCP>
<ows:HTTP>
<ows:Get ns4:href="https://localhost:8993/services/csw"/>
<ows:Post ns4:href="https://localhost:8993/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="typeName">
<ows:Value>csw:Record</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputFormat">
<ows:Value>application/xml</ows:Value>
<ows:Value>application/atom+xml</ows:Value>
<ows:Value>text/xml</ows:Value>
<ows:Value>application/json</ows:Value>
</ows:Parameter>
<ows:Parameter name="schemaLanguage">
<ows:Value>http://www.w3.org/XMLSchema</ows:Value>
<ows:Value>http://www.w3.org/XML/Schema</ows:Value>
<ows:Value>http://www.w3.org/2001/XMLSchema</ows:Value>
<ows:Value>http://www.w3.org/TR/xmlschema-1/</ows:Value>
</ows:Parameter>
</ows:Operation>
<ows:Operation name="GetRecords">
<ows:DCP>
<ows:HTTP>
<ows:Get ns4:href="https://localhost:8993/services/csw"/>
<ows:Post ns4:href="https://localhost:8993/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="ResultType">
<ows:Value>hits</ows:Value>
<ows:Value>results</ows:Value>
<ows:Value>validate</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputFormat">
<ows:Value>application/xml</ows:Value>
<ows:Value>application/atom+xml</ows:Value>
<ows:Value>text/xml</ows:Value>
<ows:Value>application/json</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputSchema">
<ows:Value>urn:catalog:metacard</ows:Value>
<ows:Value>http://www.opengis.net/cat/csw/2.0.2</ows:Value>
</ows:Parameter>
<ows:Parameter name="typeNames">
<ows:Value>csw:Record</ows:Value>
</ows:Parameter>
<ows:Parameter name="ConstraintLanguage">
<ows:Value>Filter</ows:Value>
<ows:Value>CQL_Text</ows:Value>
</ows:Parameter>
<ows:Constraint name="FederatedCatalogs">
<ows:Value>Source1</ows:Value>
<ows:Value>Source2</ows:Value>
</ows:Constraint>
</ows:Operation>
<ows:Operation name="GetRecordById">
<ows:DCP>
<ows:HTTP>
<ows:Get ns4:href="https://localhost:8993/services/csw"/>
<ows:Post ns4:href="https://localhost:8993/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="OutputSchema">
<ows:Value>urn:catalog:metacard</ows:Value>
<ows:Value>http://www.opengis.net/cat/csw/2.0.2</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputFormat">
<ows:Value>application/xml</ows:Value>
<ows:Value>application/atom+xml</ows:Value>
<ows:Value>text/xml</ows:Value>
<ows:Value>application/json</ows:Value>
</ows:Parameter>
<ows:Parameter name="ResultType">
<ows:Value>hits</ows:Value>
<ows:Value>results</ows:Value>
<ows:Value>validate</ows:Value>
</ows:Parameter>
<ows:Parameter name="ElementSetName">
<ows:Value>brief</ows:Value>
<ows:Value>summary</ows:Value>
<ows:Value>full</ows:Value>
</ows:Parameter>
</ows:Operation>
<ows:Parameter name="service">
<ows:Value>CSW</ows:Value>
</ows:Parameter>
<ows:Parameter name="version">
<ows:Value>2.0.2</ows:Value>
</ows:Parameter>
</ows:OperationsMetadata>
<ogc:Filter_Capabilities>
<ogc:Spatial_Capabilities>
<ogc:GeometryOperands>
<ogc:GeometryOperand>gml:Point</ogc:GeometryOperand>
<ogc:GeometryOperand>gml:LineString</ogc:GeometryOperand>
<ogc:GeometryOperand>gml:Polygon</ogc:GeometryOperand>
</ogc:GeometryOperands>
<ogc:SpatialOperators>
<ogc:SpatialOperator name="BBOX"/>
<ogc:SpatialOperator name="Beyond"/>
<ogc:SpatialOperator name="Contains"/>
<ogc:SpatialOperator name="Crosses"/>
<ogc:SpatialOperator name="Disjoint"/>
<ogc:SpatialOperator name="DWithin"/>
<ogc:SpatialOperator name="Intersects"/>
<ogc:SpatialOperator name="Overlaps"/>
<ogc:SpatialOperator name="Touches"/>
<ogc:SpatialOperator name="Within"/>
</ogc:SpatialOperators>
</ogc:Spatial_Capabilities>
<ogc:Scalar_Capabilities>
<ogc:LogicalOperators/>
<ogc:ComparisonOperators>
<ogc:ComparisonOperator>Between</ogc:ComparisonOperator>
<ogc:ComparisonOperator>NullCheck</ogc:ComparisonOperator>
<ogc:ComparisonOperator>Like</ogc:ComparisonOperator>
<ogc:ComparisonOperator>EqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>GreaterThan</ogc:ComparisonOperator>
<ogc:ComparisonOperator>GreaterThanEqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>LessThan</ogc:ComparisonOperator>
<ogc:ComparisonOperator>LessThanEqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>EqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>NotEqualTo</ogc:ComparisonOperator>
</ogc:ComparisonOperators>
</ogc:Scalar_Capabilities>
<ogc:Id_Capabilities>
<ogc:EID/>
</ogc:Id_Capabilities>
</ogc:Filter_Capabilities>
</csw:Capabilities>
DescribeRecord Operation
The DescribeRecord operation retrieves the type definitions used by metadata of one or more registered resource types. There are two request types: one for GET and one for POST. Each request has the following common data parameters:
- Namespace
-
In POST operations, namespaces are defined in the XML. In GET operations, namespaces are defined in a comma-separated list of the form: xmlns([prefix=]namespace-url)(,xmlns([prefix=]namespace-url))* - Service
-
The service being used, in this case it is fixed at CSW.
- Version
-
The version of the service being used (2.0.2).
- OutputFormat
-
The desired output format of the response. Currently, only one format is supported (application/xml). If this parameter is supplied, it is validated against the known type. If it is not supplied, the request passes through and returns the XML response upon success.
- SchemaLanguage
-
The schema language from the request. This is validated against the known list of supported schema languages (refer to http://www.w3.org/XML/Schema).
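The GET-style namespace list described above can be turned into a prefix-to-URL map. This is a hypothetical sketch of the parsing rule, not DDF's implementation; an entry with no prefix is treated as the default namespace:

```python
import re

def parse_namespace_param(value):
    # Parse a CSW GET NAMESPACE parameter of the form
    #   xmlns([prefix=]namespace-url)(,xmlns([prefix=]namespace-url))*
    # into a {prefix: namespace-url} dict; "" keys the default namespace.
    result = {}
    for part in re.findall(r"xmlns\(([^)]*)\)", value):
        prefix, sep, url = part.partition("=")
        if sep:
            result[prefix] = url      # prefixed namespace, e.g. csw=...
        else:
            result[""] = part         # no prefix: default namespace
    return result

print(parse_namespace_param(
    "xmlns(csw=http://www.opengis.net/cat/csw/2.0.2),xmlns(http://www.opengis.net/ogc)"))
```

The resulting map is what a server would match request prefixes (such as csw in csw:Record) against.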
DescribeRecord HTTP GET
The HTTP GET request differs from the POST request in that the typeName is a comma-separated list of namespace prefix qualified types as strings (e.g., csw:Record,xyz:MyType). These prefixes are then matched against the prefix qualified namespaces in the request. This is converted to a list of QName(s). In this way, it behaves exactly as the post request that uses a list of QName(s) in the first place.
http://<DDF_HOST>:<DDF_PORT>/services/csw?service=CSW&version=2.0.2&request=DescribeRecord&NAMESPACE=xmlns(http://www.opengis.net/cat/csw/2.0.2)&outputFormat=application/xml&schemaLanguage=http://www.w3.org/XML/Schema
DescribeRecord HTTP POST
The HTTP POST request DescribeRecordType has the typeName as a list of QName(s). The QNames are matched against the namespaces by prefix, if prefixes exist.
DescribeRecord XML Request
<?xml version="1.0" ?>
<DescribeRecord
version="2.0.2"
service="CSW"
outputFormat="application/xml"
schemaLanguage="http://www.w3.org/XML/Schema"
xmlns="http://www.opengis.net/cat/csw/2.0.2">
</DescribeRecord>
DescribeRecord Response
The following is an example of an application/xml response to the DescribeRecord operation.
<DescribeRecordResponse xsi:schemaLocation="http://www.opengis.net/csw/ogc/csw/2.0.2/CSW-discovery.xsd" xmlns="http://www.opengis.net/cat/csw/2.0.2" xmlns:ns2="http://www.opengis.net/ogc" xmlns:ns3="http://www.opengis.net/gml" xmlns:ns4="http://www.w3.org/1999/xlink" xmlns:ns5="http://www.opengis.net/ows" xmlns:ns6="http://purl.org/dc/elements/1.1/" xmlns:ns7="http://purl.org/dc/terms/" xmlns:ns8="http://www.w3.org/2001/SMIL20/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<SchemaComponent targetNamespace="http://www.opengis.net/cat/csw/2.0.2" schemaLanguage="http://www.w3.org/XML/Schema">
<xsd:schema elementFormDefault="qualified" id="csw-record" targetNamespace="http://www.opengis.net/cat/csw/2.0.2" version="2.0.2" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:csw="http://www.opengis.net/cat/csw/2.0.2" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dct="http://purl.org/dc/terms/" xmlns:ows="http://www.opengis.net/ows">
<xsd:annotation>
<xsd:appinfo>
<dc:identifier>http://schemas.opengis.net/csw/2.0.2/record.xsd</dc:identifier>
</xsd:appinfo>
<xsd:documentation xml:lang="en">This schema defines the basic record types that must be supported by all CSW implementations. These correspond to full, summary, and brief views based on DCMI metadata terms.</xsd:documentation>
</xsd:annotation>
<xsd:import namespace="http://purl.org/dc/terms/" schemaLocation="rec-dcterms.xsd"/>
<xsd:import namespace="http://purl.org/dc/elements/1.1/" schemaLocation="rec-dcmes.xsd"/>
<xsd:import namespace="http://www.opengis.net/ows" schemaLocation="../../ows/1.0.0/owsAll.xsd"/>
<xsd:element abstract="true" id="AbstractRecord" name="AbstractRecord" type="csw:AbstractRecordType"/>
<xsd:complexType abstract="true" id="AbstractRecordType" name="AbstractRecordType"/>
<xsd:element name="DCMIRecord" substitutionGroup="csw:AbstractRecord" type="csw:DCMIRecordType"/>
<xsd:complexType name="DCMIRecordType">
<xsd:annotation>
<xsd:documentation xml:lang="en">This type encapsulates all of the standard DCMI metadata terms, including the Dublin Core refinements; these terms may be mapped to the profile-specific information model.
</xsd:documentation>
</xsd:annotation>
<xsd:complexContent>
<xsd:extension base="csw:AbstractRecordType">
<xsd:sequence>
<xsd:group ref="dct:DCMI-terms"/>
</xsd:sequence>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
<xsd:element name="BriefRecord" substitutionGroup="csw:AbstractRecord" type="csw:BriefRecordType"/>
<xsd:complexType final="#all" name="BriefRecordType">
<xsd:annotation>
<xsd:documentation xml:lang="en">This type defines a brief representation of the common record format. It extends AbstractRecordType to include only the dc:identifier and dc:type properties.
</xsd:documentation>
</xsd:annotation>
<xsd:complexContent>
<xsd:extension base="csw:AbstractRecordType">
<xsd:sequence>
<xsd:element maxOccurs="unbounded" minOccurs="1" ref="dc:identifier"/>
<xsd:element maxOccurs="unbounded" minOccurs="1" ref="dc:title"/>
<xsd:element minOccurs="0" ref="dc:type"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="ows:BoundingBox"/>
</xsd:sequence>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
<xsd:element name="SummaryRecord" substitutionGroup="csw:AbstractRecord" type="csw:SummaryRecordType"/>
<xsd:complexType final="#all" name="SummaryRecordType">
<xsd:annotation>
<xsd:documentation xml:lang="en">This type defines a summary representation of the common record format. It extends AbstractRecordType to include the core properties.
</xsd:documentation>
</xsd:annotation>
<xsd:complexContent>
<xsd:extension base="csw:AbstractRecordType">
<xsd:sequence>
<xsd:element maxOccurs="unbounded" minOccurs="1" ref="dc:identifier"/>
<xsd:element maxOccurs="unbounded" minOccurs="1" ref="dc:title"/>
<xsd:element minOccurs="0" ref="dc:type"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="dc:subject"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="dc:format"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="dc:relation"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="dct:modified"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="dct:abstract"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="dct:spatial"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="ows:BoundingBox"/>
</xsd:sequence>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
<xsd:element name="Record" substitutionGroup="csw:AbstractRecord" type="csw:RecordType"/>
<xsd:complexType final="#all" name="RecordType">
<xsd:annotation>
<xsd:documentation xml:lang="en">This type extends DCMIRecordType to add ows:BoundingBox; it may be used to specify a spatial envelope for the catalogued resource.
</xsd:documentation>
</xsd:annotation>
<xsd:complexContent>
<xsd:extension base="csw:DCMIRecordType">
<xsd:sequence>
<xsd:element maxOccurs="unbounded" minOccurs="0" name="AnyText" type="csw:EmptyType"/>
<xsd:element maxOccurs="unbounded" minOccurs="0" ref="ows:BoundingBox"/>
</xsd:sequence>
</xsd:extension>
 </xsd:complexContent>
</xsd:complexType>
<xsd:complexType name="EmptyType"/>
</xsd:schema>
</SchemaComponent>
</DescribeRecordResponse>
GetRecords Operation
The GetRecords operation is the principal means of searching the catalog. The matching entries may be included with the response. The client may assign a requestId (absolute URI). A distributed search is performed if the DistributedSearch element is present and the catalog is a member of a federation. Profiles may allow alternative query expressions. There are two request types: one for GET and one for POST. Each request has the following common data parameters:
- Namespace
-
In POST operations, namespaces are defined in the XML. In GET operations, namespaces are defined in a comma-separated list of the form xmlns([prefix=]namespace-url)(,xmlns([prefix=]namespace-url))*.
- Service
-
The service being used, in this case it is fixed at CSW.
- Version
-
The version of the service being used (2.0.2).
- OutputFormat
-
The desired output format of the response. Currently, only one format is supported (application/xml). If this parameter is supplied, it is validated against the known type. If it is not supplied, the request passes through and returns the XML response upon success.
- OutputSchema
-
The schema of the records returned in the response. This is validated against the known list of supported output schemas (e.g., http://www.opengis.net/cat/csw/2.0.2 or urn:catalog:metacard).
- ElementSetName
-
CodeList with allowed values of “brief”, “summary”, or “full”. The default value is "summary". The predefined set names of “brief”, “summary”, and “full” represent different levels of detail for the source record. "Brief" represents the least amount of detail, and "full" represents all the metadata record elements.
GetRecords HTTP GET
The HTTP GET request differs from the POST request in that it has the typeNames as a comma-separated list of namespace prefix qualified types as strings (e.g., csw:Record,xyz:MyType). These prefixes are then matched against the prefix qualified namespaces in the request. This is converted to a list of QName(s). In this way, it behaves exactly as the POST request that uses a list of QName(s) in the first place.
http://<DDF_HOST>:<DDF_PORT>/services/csw?service=CSW&version=2.0.2&request=GetRecords&outputFormat=application/xml&outputSchema=http://www.opengis.net/cat/csw/2.0.2&NAMESPACE=xmlns(csw=http://www.opengis.net/cat/csw/2.0.2)&resultType=results&typeNames=csw:Record&ElementSetName=brief&ConstraintLanguage=CQL_TEXT&constraint=AnyText Like '%25'
GetRecords HTTP POST
The HTTP POST request GetRecords has the "typeNames" as a List of QName(s). The QNames are matched against the namespaces by prefix, if prefixes exist.
<?xml version="1.0" ?>
<GetRecords xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
service="CSW"
version="2.0.2"
maxRecords="4"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="http://www.opengis.net/cat/csw/2.0.2"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>AnyText</ogc:PropertyName>
<ogc:Literal>%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</Constraint>
</Query>
</GetRecords>
GetRecords Specific Source
It is possible to query a specific source by specifying a query for that source-id. The valid source-ids are listed in the "FederatedCatalogs" section of the GetCapabilities response. The example below shows how to query for a specific source.
NOTE: The DistributedSearch element must be specified with a hopCount greater than 1 to identify that this is a federated query; otherwise the source-ids will be ignored.
<?xml version="1.0" ?>
<csw:GetRecords resultType="results"
outputFormat="application/xml"
outputSchema="urn:catalog:metacard"
startPosition="1"
maxRecords="10"
service="CSW"
version="2.0.2"
xmlns:ns2="http://www.opengis.net/ogc" xmlns:csw="http://www.opengis.net/cat/csw/2.0.2" xmlns:ns4="http://www.w3.org/1999/xlink" xmlns:ns3="http://www.opengis.net/gml" xmlns:ns9="http://www.w3.org/2001/SMIL20/Language" xmlns:ns5="http://www.opengis.net/ows" xmlns:ns6="http://purl.org/dc/elements/1.1/" xmlns:ns7="http://purl.org/dc/terms/" xmlns:ns8="http://www.w3.org/2001/SMIL20/">
<csw:DistributedSearch hopCount="2" />
<ns10:Query typeNames="csw:Record" xmlns="" xmlns:ns10="http://www.opengis.net/cat/csw/2.0.2">
<ns10:ElementSetName>full</ns10:ElementSetName>
<ns10:Constraint version="1.1.0">
<ns2:Filter>
<ns2:And>
<ns2:PropertyIsLike wildCard="*" singleChar="#" escapeChar="!">
<ns2:PropertyName>source-id</ns2:PropertyName>
<ns2:Literal>Source1</ns2:Literal>
</ns2:PropertyIsLike>
<ns2:PropertyIsLike wildCard="*" singleChar="#" escapeChar="!">
<ns2:PropertyName>title</ns2:PropertyName>
<ns2:Literal>*</ns2:Literal>
</ns2:PropertyIsLike>
</ns2:And>
</ns2:Filter>
</ns10:Constraint>
</ns10:Query>
</csw:GetRecords>
GetRecords Response
The following is an example of an application/xml response to the GetRecords operation.
<csw:GetRecordsResponse version="2.0.2" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dct="http://purl.org/dc/terms/" xmlns:ows="http://www.opengis.net/ows" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:csw="http://www.opengis.net/cat/csw/2.0.2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<csw:SearchStatus timestamp="2014-02-19T15:33:44.602-05:00"/>
<csw:SearchResults numberOfRecordsMatched="41" numberOfRecordsReturned="4" nextRecord="5" recordSchema="http://www.opengis.net/cat/csw/2.0.2" elementSet="summary">
<csw:SummaryRecord>
<dc:identifier>182fb33103414e5cbb06f8693b526239</dc:identifier>
<dc:title>Product10</dc:title>
<dc:type>pdf</dc:type>
<dct:modified>2014-02-19T15:22:51.563-05:00</dct:modified>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>20.0 10.0</ows:LowerCorner>
<ows:UpperCorner>20.0 10.0</ows:UpperCorner>
</ows:BoundingBox>
</csw:SummaryRecord>
<csw:SummaryRecord>
<dc:identifier>c607440db9b0407e92000d9260d35444</dc:identifier>
<dc:title>Product03</dc:title>
<dc:type>pdf</dc:type>
<dct:modified>2014-02-19T15:22:51.563-05:00</dct:modified>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>6.0 3.0</ows:LowerCorner>
<ows:UpperCorner>6.0 3.0</ows:UpperCorner>
</ows:BoundingBox>
</csw:SummaryRecord>
<csw:SummaryRecord>
<dc:identifier>034cc757abd645f0abe6acaccfe194de</dc:identifier>
<dc:title>Product03</dc:title>
<dc:type>pdf</dc:type>
<dct:modified>2014-02-19T15:22:51.563-05:00</dct:modified>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>6.0 3.0</ows:LowerCorner>
<ows:UpperCorner>6.0 3.0</ows:UpperCorner>
</ows:BoundingBox>
</csw:SummaryRecord>
<csw:SummaryRecord>
<dc:identifier>5d6e987bd6084bd4919d06b63b77a007</dc:identifier>
<dc:title>Product01</dc:title>
<dc:type>pdf</dc:type>
<dct:modified>2014-02-19T15:22:51.563-05:00</dct:modified>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>2.0 1.0</ows:LowerCorner>
<ows:UpperCorner>2.0 1.0</ows:UpperCorner>
</ows:BoundingBox>
</csw:SummaryRecord>
</csw:SearchResults>
</csw:GetRecordsResponse>
GetRecordById Operation
The GetRecordById operation request retrieves the default representation of catalog records using their identifier. This operation presumes that a previous query has been performed in order to obtain the identifiers that may be used with this operation. For example, records returned by a GetRecords operation may contain references to other records in the catalog that may be retrieved using the GetRecordById operation. This operation is also a subset of the GetRecords operation and is included as a convenient short form for retrieving and linking to records in a catalog. There are two request types: one for GET and one for POST. Each request has the following common data parameters:
- Namespace
-
In POST operations, namespaces are defined in the XML. In GET operations, namespaces are defined in a comma-separated list of the form: xmlns([prefix=]namespace-url)(,xmlns([prefix=]namespace-url))*
- Service
-
The service being used; in this case, it is fixed at "CSW".
- Version
-
The version of the service being used (2.0.2).
- OutputFormat
-
The output format the requester wants the response in. Currently, only one format (application/xml) is supported. If this parameter is supplied, it is validated against the known type; if it is not supplied, the request passes through and returns the XML response upon success.
- OutputSchema
-
The schema language of the request, validated against the known list of supported schema languages (refer to http://www.w3.org/XML/Schema).
- ElementSetName
-
CodeList with allowed values of “brief”, “summary”, or “full”. The default value is "summary". The predefined set names of “brief”, “summary”, and “full” represent different levels of detail for the source record. "Brief" represents the least amount of detail, and "full" represents all the metadata record elements.
- Id
-
The Id parameter is a comma-separated list of record identifiers for the records that CSW returns to the client. In the XML encoding, one or more <Id> elements may be used to specify the record identifier to be retrieved.
GetRecordById HTTP GET
The following is an example of an HTTP GET request:
http://<DDF_HOST>:<DDF_PORT>/services/csw?service=CSW&version=2.0.2&request=GetRecordById&NAMESPACE=xmlns="http://www.opengis.net/cat/csw/2.0.2"&ElementSetName=full&outputFormat=application/xml&outputSchema=http://www.opengis.net/cat/csw/2.0.2&id=fd7ff1535dfe47db8793b550d4170424,ba908634c0eb439b84b5d9c42af1f871
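Parsing the KVP NAMESPACE parameter, of the form `xmlns([prefix=]namespace-url)` repeated with commas, can be sketched as below. This is a hypothetical helper for illustration, not DDF's actual parser.

```python
import re

# Sketch: parse the CSW KVP NAMESPACE parameter, of the form
#   xmlns([prefix=]namespace-url)(,xmlns([prefix=]namespace-url))*
# into a {prefix: uri} map, where an empty prefix denotes the default
# namespace. Hypothetical helper, not DDF's actual implementation.

NS_DECL = re.compile(r"xmlns\(([^)]*)\)")

def parse_namespace_param(value):
    namespaces = {}
    for decl in NS_DECL.findall(value):
        prefix, sep, uri = decl.partition("=")
        if not sep:  # no "=": declaration is a default namespace
            prefix, uri = "", prefix
        namespaces[prefix] = uri
    return namespaces
```

For example, `parse_namespace_param("xmlns(csw=http://www.opengis.net/cat/csw/2.0.2)")` maps the `csw` prefix to the CSW 2.0.2 namespace.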
GetRecordById HTTP POST
The following is an example of an HTTP POST request:
<GetRecordById xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
service="CSW"
version="2.0.2"
outputFormat="application/xml"
outputSchema="http://www.opengis.net/cat/csw/2.0.2"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2
../../../csw/2.0.2/CSW-discovery.xsd">
<ElementSetName>full</ElementSetName>
<Id>182fb33103414e5cbb06f8693b526239</Id>
<Id>c607440db9b0407e92000d9260d35444</Id>
</GetRecordById>
GetRecordByIdResponse
The following is an example of an application/xml response to the GetRecordById operation:
<csw:GetRecordByIdResponse xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/" xmlns:ows="http://www.opengis.net/ows"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<csw:Record>
<dc:identifier>182fb33103414e5cbb06f8693b526239</dc:identifier>
<dct:bibliographicCitation>182fb33103414e5cbb06f8693b526239</dct:bibliographicCitation>
<dc:title>Product10</dc:title>
<dct:alternative>Product10</dct:alternative>
<dc:type>pdf</dc:type>
<dc:date>2014-02-19T15:22:51.563-05:00</dc:date>
<dct:modified>2014-02-19T15:22:51.563-05:00</dct:modified>
<dct:created>2014-02-19T15:22:51.563-05:00</dct:created>
<dct:dateAccepted>2014-02-19T15:22:51.563-05:00</dct:dateAccepted>
<dct:dateCopyrighted>2014-02-19T15:22:51.563-05:00</dct:dateCopyrighted>
<dct:dateSubmitted>2014-02-19T15:22:51.563-05:00</dct:dateSubmitted>
<dct:issued>2014-02-19T15:22:51.563-05:00</dct:issued>
<dc:source>ddf.distribution</dc:source>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>20.0 10.0</ows:LowerCorner>
<ows:UpperCorner>20.0 10.0</ows:UpperCorner>
</ows:BoundingBox>
</csw:Record>
<csw:Record>
<dc:identifier>c607440db9b0407e92000d9260d35444</dc:identifier>
<dct:bibliographicCitation>c607440db9b0407e92000d9260d35444</dct:bibliographicCitation>
<dc:title>Product03</dc:title>
<dct:alternative>Product03</dct:alternative>
<dc:type>pdf</dc:type>
<dc:date>2014-02-19T15:22:51.563-05:00</dc:date>
<dct:modified>2014-02-19T15:22:51.563-05:00</dct:modified>
<dct:created>2014-02-19T15:22:51.563-05:00</dct:created>
<dct:dateAccepted>2014-02-19T15:22:51.563-05:00</dct:dateAccepted>
<dct:dateCopyrighted>2014-02-19T15:22:51.563-05:00</dct:dateCopyrighted>
<dct:dateSubmitted>2014-02-19T15:22:51.563-05:00</dct:dateSubmitted>
<dct:issued>2014-02-19T15:22:51.563-05:00</dct:issued>
<dc:source>ddf.distribution</dc:source>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>6.0 3.0</ows:LowerCorner>
<ows:UpperCorner>6.0 3.0</ows:UpperCorner>
</ows:BoundingBox>
</csw:Record>
</csw:GetRecordByIdResponse>
| CSW Record Field | Metacard Field | Brief Record | Summary Record | Record |
|---|---|---|---|---|
| dc:title | title | 1-n | 1-n | 0-n |
| dc:creator | | | | 0-n |
| dc:subject | | | 0-n | 0-n |
| dc:description | | | | 0-n |
| dc:publisher | | | | 0-n |
| dc:contributor | | | | 0-n |
| dc:date | modified | | | 0-n |
| dc:type | metadata-content-type | 0-1 | 0-1 | 0-n |
| dc:format | | | 0-n | 0-n |
| dc:identifier | id | 1-n | 1-n | 0-n |
| dc:source | source-id | | | 0-n |
| dc:language | | | | 0-n |
| dc:relation | | | 0-n | 0-n |
| dc:coverage | | | | 0-n |
| dc:rights | | | | 0-n |
| dct:abstract | | | 0-n | 0-n |
| dct:accessRights | | | | 0-n |
| dct:alternative | title | | | 0-n |
| dct:audience | | | | 0-n |
| dct:available | | | | 0-n |
| dct:bibliographicCitation | id | | | 0-n |
| dct:conformsTo | | | | 0-n |
| dct:created | created | | | 0-n |
| dct:dateAccepted | effective | | | 0-n |
| dct:dateCopyrighted | effective | | | 0-n |
| dct:dateSubmitted | modified | | | 0-n |
| dct:educationLevel | | | | 0-n |
| dct:extent | | | | 0-n |
| dct:hasFormat | | | | 0-n |
| dct:hasPart | | | | 0-n |
| dct:hasVersion | | | | 0-n |
| dct:isFormatOf | | | | 0-n |
| dct:isPartOf | | | | 0-n |
| dct:isReferencedBy | | | | 0-n |
| dct:isReplacedBy | | | | 0-n |
| dct:isRequiredBy | | | | 0-n |
| dct:issued | modified | | | 0-n |
| dct:isVersionOf | | | | 0-n |
| dct:license | | | | 0-n |
| dct:mediator | | | | 0-n |
| dct:medium | | | | 0-n |
| dct:modified | modified | | 0-n | 0-n |
| dct:provenance | | | | 0-n |
| dct:references | | | | 0-n |
| dct:replaces | | | | 0-n |
| dct:requires | | | | 0-n |
| dct:rightsHolder | | | | 0-n |
| dct:spatial | location | | 0-n | 0-n |
| dct:tableOfContents | | | | 0-n |
| dct:temporal | effective + " - " + expiration | | | 0-n |
| dct:valid | expiration | | | 0-n |
| ows:BoundingBox | | 0-n | 0-n | 0-n |
Transaction Operation
Transactions define the operations for creating, modifying, and deleting catalog records. The supported sub-operations for the Transaction operation are Insert, Update, and Delete.
The CSW Transactions endpoint only supports HTTP POST requests since there are no KVP operations.
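Since the Transaction endpoint accepts only HTTP POST, a client submits the Transaction XML document as the request body. The sketch below assembles such a request with the Python standard library; the host, port, and body are placeholders, and the Content-Type header is an assumption for illustration.

```python
from urllib import request

# Sketch: build an HTTP POST request carrying a CSW Transaction document
# (the only method the Transaction endpoint supports, as noted above).
# Host/port and the XML body are placeholders; send with urllib.request.urlopen.
def build_transaction_request(host, port, transaction_xml):
    return request.Request(
        f"http://{host}:{port}/services/csw",
        data=transaction_xml.encode("utf-8"),
        headers={"Content-Type": "text/xml"},  # assumed media type
        method="POST",
    )
```

A caller would pass one of the Insert, Update, or Delete documents shown below as `transaction_xml` and submit the returned request.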
Transaction Insert Sub-Operation HTTP POST
The Insert sub-operation is a method for one or more records to be inserted into the catalog. The schema of the record needs to conform to the schema of the information model that the catalog supports as described using the DescribeRecord operation.
The following example shows a request for a record to be inserted.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
verboseResponse="true"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
<csw:Insert typeName="csw:Record">
<csw:Record
xmlns:ows="http://www.opengis.net/ows"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<dc:identifier></dc:identifier>
<dc:title>Aliquam fermentum purus quis arcu</dc:title>
<dc:type>http://purl.org/dc/dcmitype/Text</dc:type>
<dc:subject>Hydrography--Dictionaries</dc:subject>
<dc:format>application/pdf</dc:format>
<dc:date>2006-05-12</dc:date>
<dct:abstract>Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.</dct:abstract>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>44.792 -6.171</ows:LowerCorner>
<ows:UpperCorner>51.126 -2.228</ows:UpperCorner>
</ows:BoundingBox>
</csw:Record>
</csw:Insert>
</csw:Transaction>
Transaction Insert Response
The following is an example of an application/xml response to the Transaction Insert sub-operation:
Note that you will only receive the InsertResult element if you specify verboseResponse="true".
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:TransactionResponse xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ns3="http://www.w3.org/1999/xlink"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ns5="http://www.w3.org/2001/SMIL20/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ows="http://www.opengis.net/ows"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance"
version="2.0.2"
ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd">
<csw:TransactionSummary>
<csw:totalInserted>1</csw:totalInserted>
<csw:totalUpdated>0</csw:totalUpdated>
<csw:totalDeleted>0</csw:totalDeleted>
</csw:TransactionSummary>
<csw:InsertResult>
<csw:BriefRecord>
<dc:identifier>2dbcfba3f3e24e3e8f68c50f5a98a4d1</dc:identifier>
<dc:title>Aliquam fermentum purus quis arcu</dc:title>
<dc:type>http://purl.org/dc/dcmitype/Text</dc:type>
<ows:BoundingBox crs="EPSG:4326">
<ows:LowerCorner>-6.171 44.792</ows:LowerCorner>
<ows:UpperCorner>-2.228 51.126</ows:UpperCorner>
</ows:BoundingBox>
</csw:BriefRecord>
</csw:InsertResult>
</csw:TransactionResponse>
Transaction Update Sub-Operation HTTP POST
The Update sub-operation is a method to specify values used to change existing information in the catalog. If individual record property values are specified in the Update element, using the RecordProperty element, then those individual property values of a catalog record are replaced. The RecordProperty contains a Name and Value element. The Name element is used to specify the name of the record property to be updated. The Value element contains the value that will be used to update the record in the catalog. The values in the Update will completely replace those that are already in the record. A property is removed only if the RecordProperty contains a Name but not a Value.
The number of records affected by an Update operation is determined by the contents of the Constraint element, which contains a filter for limiting the update to a specific record or group of records.
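The RecordProperty semantics above can be illustrated with a plain-dictionary sketch: each (Name, Value) pair completely replaces that property on every record matched by the constraint, and a Name with no Value removes the property. This is an illustration only, not DDF's catalog code.

```python
# Sketch of the Update semantics described above. Each RecordProperty is a
# (name, value) pair; value=None models a RecordProperty with a Name but no
# Value, which removes the property from the record.
def apply_record_properties(record, record_properties):
    for name, value in record_properties:
        if value is None:          # Name without Value: remove the property
            record.pop(name, None)
        else:                      # Name with Value: replace it completely
            record[name] = value
    return record
```

Applied to a record such as `{"title": "...", "date": "2006-05-12", "format": "application/pdf"}`, the pairs `("title", "Updated Title")`, `("date", "2015-08-25")`, and `("format", None)` would replace the first two properties and remove the third.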
The following example shows how the newly inserted record could be updated to modify the date field. If the update request contains a <csw:Record> rather than a set of <RecordProperty> elements plus a <Constraint>, the existing record with the same ID will be replaced by the new record.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
<csw:Update>
<csw:Record
xmlns:ows="http://www.opengis.net/ows"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<dc:identifier>2dbcfba3f3e24e3e8f68c50f5a98a4d1</dc:identifier>
<dc:title>Aliquam fermentum purus quis arcu</dc:title>
<dc:type>http://purl.org/dc/dcmitype/Text</dc:type>
<dc:subject>Hydrography--Dictionaries</dc:subject>
<dc:format>application/pdf</dc:format>
<dc:date>2008-08-10</dc:date>
<dct:abstract>Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.</dct:abstract>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>44.792 -6.171</ows:LowerCorner>
<ows:UpperCorner>51.126 -2.228</ows:UpperCorner>
</ows:BoundingBox>
</csw:Record>
</csw:Update>
</csw:Transaction>
The following example shows how the newly inserted record could be updated to modify the date field while using a filter constraint with title equal to Aliquam fermentum purus quis arcu.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
<csw:Update>
<csw:RecordProperty>
<csw:Name>title</csw:Name>
<csw:Value>Updated Title</csw:Value>
</csw:RecordProperty>
<csw:RecordProperty>
<csw:Name>date</csw:Name>
<csw:Value>2015-08-25</csw:Value>
</csw:RecordProperty>
<csw:RecordProperty>
<csw:Name>format</csw:Name>
<csw:Value></csw:Value>
</csw:RecordProperty>
<csw:Constraint version="2.0.0">
<ogc:Filter>
<ogc:PropertyIsEqualTo>
<ogc:PropertyName>title</ogc:PropertyName>
<ogc:Literal>Aliquam fermentum purus quis arcu</ogc:Literal>
</ogc:PropertyIsEqualTo>
</ogc:Filter>
</csw:Constraint>
</csw:Update>
</csw:Transaction>
The following example shows how the newly inserted record could be updated to modify the date field while using a CQL filter constraint with title equal to Aliquam fermentum purus quis arcu.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
<csw:Update>
<csw:RecordProperty>
<csw:Name>title</csw:Name>
<csw:Value>Updated Title</csw:Value>
</csw:RecordProperty>
<csw:RecordProperty>
<csw:Name>date</csw:Name>
<csw:Value>2015-08-25</csw:Value>
</csw:RecordProperty>
<csw:RecordProperty>
<csw:Name>format</csw:Name>
<csw:Value></csw:Value>
</csw:RecordProperty>
<csw:Constraint version="2.0.0">
<ogc:CqlText>
title = 'Aliquam fermentum purus quis arcu'
</ogc:CqlText>
</csw:Constraint>
</csw:Update>
</csw:Transaction>
Transaction Update Response
The following is an example of an application/xml response to the Transaction Update sub-operation:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:TransactionResponse xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ns3="http://www.w3.org/1999/xlink"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ns5="http://www.w3.org/2001/SMIL20/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ows="http://www.opengis.net/ows"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance"
ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd"
version="2.0.2">
<csw:TransactionSummary>
<csw:totalInserted>0</csw:totalInserted>
<csw:totalUpdated>1</csw:totalUpdated>
<csw:totalDeleted>0</csw:totalDeleted>
</csw:TransactionSummary>
</csw:TransactionResponse>
Transaction Delete Sub-Operation HTTP POST
The Delete sub-operation is a method to identify a set of records to be deleted from the catalog.
The following example shows a delete request for all records with a SpatialReferenceSystem name equal to WGS-84.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction service="CSW" version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ogc="http://www.opengis.net/ogc">
<csw:Delete typeName="csw:Record" handle="something">
<csw:Constraint version="2.0.0">
<ogc:Filter>
<ogc:PropertyIsEqualTo>
<ogc:PropertyName>SpatialReferenceSystem</ogc:PropertyName>
<ogc:Literal>WGS-84</ogc:Literal>
</ogc:PropertyIsEqualTo>
</ogc:Filter>
</csw:Constraint>
</csw:Delete>
</csw:Transaction>
The following example shows a delete operation specifying a CQL constraint to delete all records with a title equal to Aliquam fermentum purus quis arcu.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction service="CSW" version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ogc="http://www.opengis.net/ogc">
<csw:Delete typeName="csw:Record" handle="something">
<csw:Constraint version="2.0.0">
<ogc:CqlText>
title = 'Aliquam fermentum purus quis arcu'
</ogc:CqlText>
</csw:Constraint>
</csw:Delete>
</csw:Transaction>
Transaction Delete Response
The following is an example of an application/xml response to the Transaction Delete sub-operation:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:TransactionResponse
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance"
ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd"
version="2.0.2">
<csw:TransactionSummary>
<csw:totalInserted>0</csw:totalInserted>
<csw:totalUpdated>0</csw:totalUpdated>
<csw:totalDeleted>1</csw:totalDeleted>
</csw:TransactionSummary>
</csw:TransactionResponse>
Install and Uninstall
The CSW endpoint can be installed and uninstalled using the normal processes described in the Configuration section.
Configuration
The CSW endpoint has no configurable properties. It can only be installed or uninstalled.
Known Issues
None
CSW v2.0.2 Source
The CSW source supports the ability to search collections of descriptive information (metadata) for data, services, and related information objects.
Using
Use the CSW source if querying a CSW version 2.0.2 compliant service.
Installing and Uninstalling
The CSW source can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
This component can be configured using the normal processes described in the Configuring DDF section. The configurable properties for the CSW source are accessed from the CSW Federated Source Configuration in the Web Console or Admin Console.
Configure the CSW Source
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Source ID | id | String | Unique name of this source. | CSW | Yes |
| CSW URL | cswUrl | String | URL to the Catalogue Services for the Web site that will be queried by this source. | | Yes |
| Username | username | String | Username to log into the CSW service. | | No |
| Password | password | String | Password to log into the CSW service. | | No |
| Disable CN Check | disableCnCheck | Boolean | Disable the CN check for the server certificate. | false | Yes |
| Force Longitude/Latitude coordinate order | isLonLatOrder | Boolean | Force Longitude/Latitude coordinate order. | false | Yes |
| Use posList in LinearRing | | Boolean | Use a <posList> element rather than a series of <pos> elements when issuing geospatial queries containing a LinearRing. | false | Yes |
| Effective Date maps to | effectiveDateMapping | String | The field in the CSW Record that should be mapped to a Metacard's effective date. This field will have a default mapping, but the user can change this to be any date-formatted field in a CSW Record. Relevant CSW fields include dateSubmitted, created, and modified. If no value is specified, the default value of created will be used. Note that the same CSW Record field cannot be used more than once in these date mapping properties. | created | No |
| Created Date maps to | createdDateMapping | String | The field in the CSW Record that should be mapped to a Metacard's created date. This field will have a default mapping, but the user can change this to be any date-formatted field in a CSW Record. Relevant CSW fields include dateSubmitted, created, and modified. If no value is specified, the default value of dateSubmitted will be used. Note that the same CSW Record field cannot be used more than once in these date mapping properties. | dateSubmitted | No |
| Modified Date maps to | modifiedDateMapping | String | The field in the CSW Record that should be mapped to a Metacard's modified date. This field will have a default mapping, but the user can change this to be any date-formatted field in a CSW Record. Relevant CSW fields include dateSubmitted, created, and modified. If no value is specified, the default value of modified will be used. Note that the same CSW Record field cannot be used more than once in these date mapping properties. | modified | No |
| Resource URI maps to | resourceUriMapping | String | CSW field to map to a Metacard's resource URI, used to retrieve the product associated with the CSW record. | source | No |
| Thumbnail maps to | thumbnailMapping | String | CSW field to map to a Metacard's thumbnail URI, used to retrieve thumbnail data associated with the CSW record. | references | No |
| Content type maps to | contentTypemapping | String | CSW field to map to a Metacard's content type. | type | No |
| Content Types | contentTypeNames | List of Strings | A list of content types that can be searched on. The user can add any content types to the list, e.g., doc, or even wildcarded types. The list of content types currently in the CSW source will be added to this list during configuration when the GetCapabilities response is returned. | | No |
| Poll Interval | pollInterval | Integer | Poll interval used to check whether the source is available (in minutes, minimum 1). | 5 | Yes |
| Connection Timeout | connectionTimeout | Integer | Amount of time (in milliseconds) to attempt to establish a connection before timing out. | 30000 | Yes |
| Receive Timeout | receiveTimeout | Integer | Amount of time (in milliseconds) to wait for a response before timing out. | 60000 | Yes |
| Output schema | outputSchema | String | Output schema. | | Yes |
| Force CQL Text as the Query Language | isCqlForced | Boolean | Force CQL text. | false | Yes |
Known Issues
-
The CSW Source does not support text path searches.
-
All contextual searches are case sensitive; case-insensitive searches are not supported.
-
Nearest neighbor spatial searches are not supported.
-
Fuzzy contextual searches are not supported.
Integrating DDF with KML
Keyhole Markup Language (KML) is an XML notation for describing geographic annotation and visualization for two- and three-dimensional maps.
KML Network Link Endpoint
The KML Network Link endpoint allows a user to generate a view-based KML Query Results Network Link. This network link can be opened with Google Earth, establishing a dynamic connection between Google Earth and DDF. The root network link will create a network link for each configured source, including the local catalog. The individual source network links will perform a query against the OpenSearch Endpoint periodically based on the current view in the KML client. The query parameters for this query are obtained by a bounding box generated by Google Earth. The root network link will refresh every 12 hours or can be forced to refresh. As a user changes their current view, the query will be re-executed with the bounding box of the new view. (This query gets re-executed two seconds after the user stops moving the view.)
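Each source network link re-queries the OpenSearch Endpoint with the bounding box of the current Google Earth view. The sketch below builds such a view-bounded query URL; the `/services/catalog/query` path appears elsewhere in this documentation, while the `bbox` and `count` parameter names follow common OpenSearch Geo conventions and are illustrative, not a statement of DDF's exact request.

```python
from urllib.parse import urlencode

# Sketch: build the kind of bbox-bounded OpenSearch query an individual
# source network link issues for the current view. bbox is
# (west, south, east, north); parameter names are illustrative OpenSearch
# Geo conventions, not necessarily DDF's exact request.
def view_query_url(host, port, bbox, count=250):
    params = {
        "bbox": ",".join(f"{c:g}" for c in bbox),
        "count": count,
    }
    return f"http://{host}:{port}/services/catalog/query?{urlencode(params)}"
```

As the view changes, the client would regenerate and re-issue this URL with the new bounding box (two seconds after the view stops moving, per the behavior described above).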
Using
Once installed, the KML Network Link endpoint can be accessed at:
http://<DDF_HOST>:<DDF_PORT>/services/catalog/kml
After the above request is sent, a KML Network Link document is returned as a response to download or open. This KML Network Link can then be opened in Google Earth.
Example Output
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kml xmlns="http://www.opengis.net/kml/2.2" xmlns:ns2="http://www.google.com/kml/ext/2.2"
xmlns:ns3="http://www.w3.org/2005/Atom" xmlns:ns4="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0">
<NetworkLink>
<name>DDF</name>
<open xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:boolean">true</open>
<Snippet maxLines="0"/>
<Link>
<href>http://0.0.0.0:8181/services/catalog/kml/sources</href>
<refreshMode>onInterval</refreshMode>
<refreshInterval>43200.0</refreshInterval>
<viewRefreshMode>never</viewRefreshMode>
<viewRefreshTime>0.0</viewRefreshTime>
<viewBoundScale>0.0</viewBoundScale>
</Link>
</NetworkLink>
</kml>
When configured to do so, the KML endpoint can serve up a KML style document. The request below will return the configured KML style document. For more information on how to configure the KML style document, see Configuration.
http://<DDF_HOST>:<DDF_PORT>/services/catalog/kml/style
The KML endpoint can also serve up Icons to be used in conjunction with the KML style document. The request below shows the format to return an icon. For more information on how to configure the KML Icons document, see Configuration.
http://<DDF_HOST>:<DDF_PORT>/services/catalog/kml/icons?<icon-name>
NOTE: <icon-name> must be the name of an icon contained in the directory being served, e.g., http://<DDF_HOST>:<DDF_PORT>/services/catalog/kml/icons?sample-icon.png
Installing and Uninstalling
The spatial-kml-networklinkendpoint feature is installed by default with the Spatial App.
Configuring
The KML Network Link endpoint can serve custom KML style documents and the icons used within them. The KML style document must be a valid XML document containing a KML style. The KML icons should be placed in a single-level directory and must be an image type (png, jpg, tif, etc.). The Description is displayed as a pop-up from the root network link in Google Earth; it may contain the general purpose of the network link and URLs to external resources.
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Style Document | styleUrl | String | KML document containing custom styling. This will be served up by the KmlEndpoint (e.g., file:///path/to/kml/style/doc.kml). | | No |
| Icons Location | iconLoc | String | Location of icons for the KmlEndpoint. | | No |
| Description | description | String | Description of this NetworkLink. Enter a short description of what this NetworkLink provides. | | No |
Known Issues
None.
KML Query Response Transformer
The KML Query Response Transformer is responsible for translating a query response into a KML-formatted document. The KML will contain an HTML description for each metacard that will display in the pop-up bubble in Google Earth. The HTML contains links to the full metadata view as well as the product.
Using
Using the OpenSearch Endpoint, for example, query with the format option set to the KML shortname: kml.
http://localhost:8181/services/catalog/query?q=schematypesearch&format=kml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kml xmlns:ns2="http://www.google.com/kml/ext/2.2" xmlns="http://www.opengis.net/kml/2.2" xmlns:ns4="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0" xmlns:ns3="http://www.w3.org/2005/Atom">
<Document id="f0884d8c-cf9b-44a1-bb5a-d3c6fb9a96b6">
<name>Results (1)</name>
<open xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">false</open>
<Style id="bluenormal">
<LabelStyle>
<scale>0.0</scale>
</LabelStyle>
<LineStyle>
<color>33ff0000</color>
<width>3.0</width>
</LineStyle>
<PolyStyle>
<color>33ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<Style id="bluehighlight">
<LabelStyle>
<scale>1.0</scale>
</LabelStyle>
<LineStyle>
<color>99ff0000</color>
<width>6.0</width>
</LineStyle>
<PolyStyle>
<color>99ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<StyleMap id="default">
<Pair>
<key>normal</key>
<styleUrl>#bluenormal</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#bluehighlight</styleUrl>
</Pair>
</StyleMap>
<Placemark id="Placemark-0103c77e66d9428d8f48fab939da528e">
<name>MultiPolygon</name>
<description><!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=windows-1252" http-equiv="content-type">
<style media="screen" type="text/css">
.label {
font-weight: bold
}
.linkTable {
width: 100% }
.thumbnailDiv {
text-align: center
} img {
max-width: 100px;
max-height: 100px;
border-style:none
}
</style>
</head>
<body>
<div class="thumbnailDiv"><a
href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f
48fab939da528e?transform=resource"><img alt="Thumnail"
src="data:image/jpeg;charset=utf-8;base64, CA=="></a></div>
<table>
<tr>
<td class="label">Source:</td>
<td>ddf.distribution</td>
</tr>
<tr>
<td class="label">Created:</td>
<td>Wed Oct 30 09:46:29 MDT 2013</td>
</tr>
<tr>
 <td class="label">Effective:</td>
<td>2014-01-07T14:48:47-0700</td>
</tr>
</table>
<table class="linkTable">
<tr>
<td><a
href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f
48fab939da528e?transform=html">View Details...</a></td>
<td><a
href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f
48fab939da528e?transform=resource">Download...</a></td>
</tr>
</table>
</body>
</html>
</description>
<TimeSpan>
<begin>2014-01-07T21:48:47</begin>
</TimeSpan>
<styleUrl>#default</styleUrl>
<MultiGeometry>
<Point>
<coordinates>102.0,2.0</coordinates>
</Point>
<MultiGeometry>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>102.0,2.0 103.0,2.0 103.0,3.0 102.0,3.0 102.0,2.0</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>100.0,0.0 101.0,0.0 101.0,1.0 100.0,1.0 100.0,0.0 100.2,0.2
100.8,0.8 100.2,0.8 100.2,0.2</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</MultiGeometry>
</MultiGeometry>
</Placemark>
</Document>
</kml>
Installing and Uninstalling
The spatial-kml-transformer feature is installed by default with the Spatial App.
Configuring
None.
Implementation Details
| Transformer Shortname | kml |
|---|---|
| MIME Type | application/vnd.google-earth.kml+xml |
Known Issues
None.
KML Metacard Transformer
The KML Metacard Transformer is responsible for translating a metacard into a KML-formatted document. The KML will contain an HTML description that will display in the pop-up bubble in Google Earth. The HTML contains links to the full metadata view as well as the product.
Using
Using the REST Endpoint, for example, request a metacard with the transform option set to the KML shortname kml:
http://localhost:8181/services/catalog/0103c77e66d9428d8f48fab939da528e?transform=kml
Example Output
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kml xmlns:ns2="http://www.google.com/kml/ext/2.2" xmlns="http://www.opengis.net/kml/2.2" xmlns:ns4="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0" xmlns:ns3="http://www.w3.org/2005/Atom">
<Placemark id="Placemark-0103c77e66d9428d8f48fab939da528e">
<name>MultiPolygon</name>
<description><!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=windows-1252" http-equiv="content-type">
<style media="screen" type="text/css">
.label {
font-weight: bold
}
.linkTable {
width: 100% }
.thumbnailDiv {
text-align: center
}
img {
max-width: 100px;
 max-height: 100px;
border-style:none
}
</style>
</head>
<body>
<div class="thumbnailDiv"><a href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource"><img alt="Thumnail" src="data:image/jpeg;charset=utf-8;base64, CA=="></a></div>
<table>
<tr>
<td class="label">Source:</td>
<td>ddf.distribution</td>
</tr>
<tr>
<td class="label">Created:</td>
<td>Wed Oct 30 09:46:29 MDT 2013</td>
</tr>
<tr>
<td class="label">Effective:</td>
<td>2014-01-07T14:58:16-0700</td>
</tr>
</table>
<table class="linkTable">
<tr>
<td><a href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=html">View Details...</a></td>
<td><a href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource">Download...</a></td>
</tr>
</table>
</body>
</html>
</description>
<TimeSpan>
<begin>2014-01-07T21:58:16</begin>
</TimeSpan>
<Style id="bluenormal">
<LabelStyle>
<scale>0.0</scale>
</LabelStyle>
<LineStyle>
<color>33ff0000</color>
<width>3.0</width>
</LineStyle>
<PolyStyle>
<color>33ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td
width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<Style id="bluehighlight">
<LabelStyle>
<scale>1.0</scale>
</LabelStyle>
<LineStyle>
<color>99ff0000</color>
<width>6.0</width>
</LineStyle>
<PolyStyle>
<color>99ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<StyleMap id="default">
<Pair>
<key>normal</key>
<styleUrl>#bluenormal</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#bluehighlight</styleUrl>
</Pair>
</StyleMap>
<MultiGeometry>
<Point>
<coordinates>102.0,2.0</coordinates>
</Point>
<MultiGeometry>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>102.0,2.0 103.0,2.0 103.0,3.0 102.0,3.0 102.0,2.0</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>100.0,0.0 101.0,0.0 101.0,1.0 100.0,1.0 100.0,0.0 100.2,0.2 100.8,0.8 100.2,0.8 100.2,0.2</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</MultiGeometry>
</Placemark>
</kml>
Installing and Uninstalling
The spatial-kml-transformer feature is installed by default with the Spatial App.
Configuring
None.
Implementation Details
| Transformer Shortname | kml |
|---|---|
| MIME Type | application/vnd.google-earth.kml+xml |
Known Issues
None.
KML Style Mapper
The KML Style Mapper provides the ability for the KmlTransformer to map a KML Style URL to a metacard based on that metacard’s attributes. For example, if a user wanted all JPEGs to be blue, the KML Style Mapper provides the ability to do so. This would also allow an administrator to configure metacards from each source to be different colors.
The configured style URLs are expected to be HTTP URLs. For more information on style URLs, refer to the KML Reference (https://developers.google.com/kml/documentation/kmlreference#styleurl).
The KML Style Mapper supports all basic and extended metacard attributes. When a style mapping is configured, the resulting transformed KML contains a <styleUrl> tag pointing to that style, rather than the default KML style supplied by the KmlTransformer.
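The mapping behavior can be pictured with a small sketch. This is an illustration only, not the DDF implementation, and the attribute names and style URL in the test below are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StyleMapperSketch {
    // Configured mappings: "attributeName=attributeValue" -> style URL.
    private final Map<String, String> mappings = new LinkedHashMap<>();

    void addMapping(String attributeName, String attributeValue, String styleUrl) {
        mappings.put(attributeName + "=" + attributeValue, styleUrl);
    }

    // Returns the style URL for the first metacard attribute that matches a
    // configured mapping, or null so the transformer can fall back to the
    // default KML style supplied by the KmlTransformer.
    String resolveStyleUrl(Map<String, String> metacardAttributes) {
        for (Map.Entry<String, String> attr : metacardAttributes.entrySet()) {
            String styleUrl = mappings.get(attr.getKey() + "=" + attr.getValue());
            if (styleUrl != null) {
                return styleUrl;
            }
        }
        return null;
    }
}
```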
Configuring
The properties below describe how to configure a Style Mapping. The configuration name is Spatial KML Style Map Entry.
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Attribute Name | attributeName | String | The name of the metacard attribute to match against (e.g., title, metadata-content-type). | | Yes |
| Attribute Value | attributeValue | String | The value of the metacard attribute. | | Yes |
| Style URL | styleUrl | String | The fully qualified URL to the KML style (e.g., http://example.com/styles#myStyle). | | Yes |
Example Values
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kml xmlns:ns2="http://www.google.com/kml/ext/2.2"
xmlns="http://www.opengis.net/kml/2.2"
xmlns:ns4="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0"
xmlns:ns3="http://www.w3.org/2005/Atom">
<Placemark id="Placemark-0103c77e66d9428d8f48fab939da528e">
<name>MultiPolygon</name>
<description><!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=windows-1252" http-equiv="content-type">
<style media="screen" type="text/css">
.label {
font-weight: bold
}
.linkTable {
width: 100% }
.thumbnailDiv {
text-align: center
} img {
max-width: 100px;
max-height: 100px;
border-style:none
}
</style>
</head>
<body>
<div class="thumbnailDiv"><a
href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource"><img alt="Thumnail"
src="data:image/jpeg;charset=utf-8;base64, CA=="></a></div>
<table>
<tr>
<td class="label">Source:</td>
<td>ddf.distribution</td>
</tr>
<tr>
<td class="label">Created:</td>
<td>Wed Oct 30 09:46:29 MDT 2013</td>
</tr>
<tr>
<td class="label">Effective:</td>
<td>2014-01-07T14:58:16-0700</td>
</tr>
</table>
<table class="linkTable">
<tr>
<td><a
href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=html">View Details...</a></td>
<td><a href="http://localhost:8181/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource">Download...</a></td>
</tr>
</table>
</body>
</html>
</description>
<TimeSpan>
<begin>2014-01-07T21:58:16</begin>
</TimeSpan>
<styleUrl>http://example.com/kml/style#sampleStyle</styleUrl>
<MultiGeometry>
<Point>
<coordinates>102.0,2.0</coordinates>
</Point>
<MultiGeometry>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>102.0,2.0 103.0,2.0 103.0,3.0 102.0,3.0
102.0,2.0</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>100.0,0.0 101.0,0.0 101.0,1.0 100.0,1.0 100.0,0.0 100.2,0.2
100.8,0.8 100.2,0.8 100.2,0.2</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</MultiGeometry>
</MultiGeometry>
</Placemark>
</kml>
Installing and Uninstalling
The KML Style Mapper is included in the spatial-kml-transformer feature and is installed by default with the Spatial App.
Implementation Details
| Transformer Shortname | kml |
|---|---|
| MIME Type | application/vnd.google-earth.kml+xml |
Known Issues
None.
Integrating DDF with WFS
The Web Feature Service (WFS) is an Open Geospatial Consortium (OGC) Specification. DDF supports the ability to integrate WFS 1.0 and WFS 2.0 Web Services.
|
DDF does not include a supported WFS Web Service (Endpoint) implementation; therefore, federation between two DDF instances is not possible via WFS. |
Working with WFS Sources
A Web Feature Service (WFS) source is an implementation of the FederatedSource interface provided by the DDF Framework. A WFS source provides capabilities for querying an Open Geospatial Consortium (OGC) WFS 1.0.0-compliant server. The results are made available to DDF clients.
WFS Features
When a query is issued to a WFS server, the output of the query is an XML document that contains a collection of feature member elements. Each WFS server can have one or more feature types, with each type being defined by a schema that extends the WFS featureMember schema. The schema for each type can be discovered by issuing a DescribeFeatureType request to the WFS server for the feature type in question. The WFS source handles WFS capability discovery and requests for feature type description when an instance of the WFS source is configured and created.
See the WFS v1.0.0 Source for more information about how to configure a WFS source.
Convert a WFS Feature
In order to expose WFS features to DDF clients, the WFS feature must be converted into the common data format of the DDF, a metacard. The OGC package contains a GenericFeatureConverter that attempts to populate mandatory metacard fields with properties from the WFS feature XML.
All properties will be mapped directly to new attributes in the metacard. However, the GenericFeatureConverter may not be able to populate the default metacard fields with properties from the feature XML.
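The net effect of that default mapping can be sketched as a straight property-for-attribute copy. This is an illustration only, not the actual GenericFeatureConverter code, which parses the feature XML via XStream:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FeatureToMetacardSketch {
    // Copies each WFS feature property into a metacard attribute of the same
    // name. Default metacard fields (title, dates, etc.) are only populated
    // when a feature property of the expected name happens to exist -- which
    // is why a custom converter may be needed for a given feature type.
    static Map<String, Object> toMetacardAttributes(Map<String, Object> featureProperties) {
        return new LinkedHashMap<>(featureProperties);
    }
}
```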
Create a Custom Converter
To more accurately map WFS feature properties to fields in the metacard, a custom converter can be created. The OGC package contains an interface, FeatureConverter, which extends the Converter ( http://xstream.codehaus.org/javadoc/com/thoughtworks/xstream/converters/Converter.html) interface provided by the XStream ( http://xstream.codehaus.org/) project. XStream is an open source API for serializing XML into Java objects and vice-versa. Additionally, a base class, AbstractFeatureConverter, has been created to handle the mapping of many fields to reduce code duplication in the custom converter classes.
-
Create the CustomConverter class, extending the ogc.catalog.common.converter.AbstractFeatureConverter class.
public class CustomConverter extends ogc.catalog.common.converter.AbstractFeatureConverter
-
Implement the FeatureConverterFactory interface and the createConverter() method for the CustomConverter.
public class CustomConverterFactory implements FeatureConverterFactory {
    private final String featureType;

    public CustomConverterFactory(String featureType) {
        this.featureType = featureType;
    }

    public FeatureConverter createConverter() {
        return new CustomConverter();
    }

    public String getFeatureType() {
        return featureType;
    }
}
-
Implement the unmarshal method required by the FeatureConverter interface. Using the createMetacardFromFeature(reader, metacardType) method implemented in the AbstractFeatureConverter is recommended.
public Metacard unmarshal(HierarchicalStreamReader reader, UnmarshallingContext ctx) {
    MetacardImpl mc = createMetacardFromFeature(reader, metacardType);
    // Set feature-specific fields on the metacard object here.
    // For example, to map a feature property called "beginningDate"
    // to the Metacard created date, you would do:
    mc.setCreatedDate((Date) mc.getAttribute("beginningDate").getValue());
    return mc;
}
-
Export the ConverterFactory to the OSGi registry by creating a blueprint.xml file for its bundle. The bean id and argument value must match the WFS feature type being converted.
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">
<bean id="custom_type" class="com.example.converter.factory.CustomConverterFactory">
<argument value="custom_type"/>
</bean>
<service ref="custom_type" interface="ogc.catalog.common.converter.factory.FeatureConverterFactory"/>
</blueprint>
For more information about registering services, see Working with OSGi.
WFS v1.0.0 Source
The WFS Source allows for requests for geographical features across the web using platform-independent calls.
Using
Use the WFS Source if querying a WFS version 1.0.0 compliant service. Also see Working with WFS Sources.
Installing and Uninstalling
The WFS Source can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
This component can be configured using the normal processes described in the Configuring DDF section.
The configurable properties for the WFS Source are accessed from the WFS Federated Source Configuration in the Admin Console.
Configuring WFS Source
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Source ID | id | String | Unique name of the Source. | WFS_v1_0_0 | Yes |
| WFS URL | wfsUrl | String | URL to the Web Feature Service (WFS) that will be queried by this source (see below). | | Yes |
| Disable CN Check | disableCnCheck | Boolean | Disable CN check for the server certificate. This should only be used when testing. | false | Yes |
| Username | username | String | Username to log in to the WFS service. | | No |
| Password | password | String | Password to log in to the WFS service. | | No |
| Non Queryable Properties | nonQueryableProperties | List of Strings | Multivalued list of properties in the feature XML that should not be used as filters. | | No |
| Poll Interval | pollInterval | Integer | Poll interval to check if the source is available (in minutes; minimum = 1). | 5 | Yes |
| Forced Spatial Filter Type | forceSpatialFilter | String | Force the selected Spatial Filter Type to be the only available Spatial Filter. | None | No |
| Connection Timeout | connectionTimeout | Integer | Amount of time to attempt to establish a connection before timing out, in milliseconds. | 30000 | Yes |
| Receive Timeout | receiveTimeout | Integer | Amount of time to wait for a response before timing out, in milliseconds. | 60000 | Yes |
WFS URL
The WFS URL must match the endpoint for the service being used. The type of service and version are added automatically, so they do not need to be included. Some servers will throw an exception if they are included twice, so do not include them.
The syntax depends on the server. However, in most cases, the syntax will be everything before the ? character in the URL that corresponds to the GetCapabilities query.
As an example, GeoServer 2.5 syntax might look like:
http://www.example.org:8080/geoserver/ows?service=wfs&version=1.0.0&request=GetCapabilities
In this case, the WFS URL would be
http://www.example.org:8080/geoserver/ows
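The rule above, everything before the ? character, can be expressed directly. A minimal sketch:

```java
public class WfsUrlSketch {
    // Derives the WFS URL to configure from a GetCapabilities URL by keeping
    // everything before the '?'. The service and version parameters are added
    // automatically by the source, so they must not be repeated.
    static String deriveWfsUrl(String getCapabilitiesUrl) {
        int queryStart = getCapabilitiesUrl.indexOf('?');
        return queryStart < 0 ? getCapabilitiesUrl : getCapabilitiesUrl.substring(0, queryStart);
    }
}
```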
Known Issues
None.
WFS v2.0.0 Source
The WFS 2.0 Source allows for requests for geographical features across the web using platform-independent calls.
Using
Use the WFS Source if querying a WFS version 2.0.0 compliant service. Also see Working with WFS Sources.
Installing and Uninstalling
The WFS Source can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
This component can be configured using the normal processes described in the Configuring DDF section.
The configurable properties for the WFS 2.0.0 Source are accessed from the WFS 2.0.0 Federated Source Configuration in the Admin Console.
Configuring WFS 2.0.0 Source
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Source ID | id | String | Unique name of the source. | WFS_v2_0_0 | Yes |
| WFS URL | wfsUrl | String | URL to the endpoint implementing the Web Feature Service (WFS) 2.0.0 spec. | | Yes |
| Disable CN Check | disableCnCheck | Boolean | Disable CN check for the server certificate. This should only be used when testing. | false | Yes |
| Coordinate Order | coordinateOrder | String | Coordinate order that the remote source expects and returns spatial data in. | Lat/Lon | Yes |
| Disable Sorting | disableSorting | Boolean | When selected, the system will not specify sort criteria with the query. This should only be used if the remote source is unable to handle sorting even when the capabilities state that 'ImplementsSorting' is supported. | false | Yes |
| Username | username | String | Username for the WFS service. | | No |
| Password | password | String | Password for the WFS service. | | No |
| Non Queryable Properties | nonQueryableProperties | List of Strings | Properties listed here will NOT be queryable, and any attempt to filter on these properties will result in an exception. | | No |
| Poll Interval | pollInterval | Integer | Poll interval to check if the source is available (in minutes; minimum = 1). | 5 | Yes |
| Forced Spatial Filter Type | forceSpatialFilter | String | Force the selected Spatial Filter Type to be the only available Spatial Filter. | | No |
| Connection Timeout | connectionTimeout | Integer | Amount of time to attempt to establish a connection before timing out, in milliseconds. | 30000 | Yes |
| Receive Timeout | receiveTimeout | Integer | Amount of time to wait for a response before timing out, in milliseconds. | 60000 | Yes |
WFS URL
The WFS URL must match the endpoint for the service being used. The type of service and version are added automatically, so they do not need to be included. Some servers will throw an exception if they are included twice, so do not include them.
The syntax depends on the server. However, in most cases, the syntax will be everything before the ? character in the URL that corresponds to the GetCapabilities query.
As an example, GeoServer 2.5 syntax might look like:
http://www.example.org:8080/geoserver/ows?service=wfs&version=2.0.0&request=GetCapabilities
In this case, the WFS URL would be
http://www.example.org:8080/geoserver/ows
Known Issues
None.
Mapping WFS Feature Properties to Metacard Attributes
The WFS 2.0 Source allows for virtually any schema to be used to describe a feature. A feature is relatively equivalent to a metacard. The MetacardMapper was added to allow an administrator to configure which feature properties map to which metacard attributes.
Using
Use the WFS MetacardMapper to configure which feature properties map to which metacard attributes when querying a WFS version 2.0.0 compliant service. When feature collection responses are returned from WFS sources, a default mapping occurs which places the feature properties into metacard attributes, which are then presented to the user via DDF. There can be situations where this automatic mapping is not optimal for your solution. Custom mappings of feature property responses to metacard attributes can be achieved through the MetacardMapper. The MetacardMapper is set by creating a feature file configuration which specifies the appropriate mapping. The mappings are specific to a given feature type.
Also see Working with WFS Sources for more advanced use cases.
Installing and Uninstalling
The WFS MetacardMapper can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
This component can be configured using the normal processes described in the Configuring DDF section.
The configurable properties for the WFS MetacardMapper are accessed from the Metacard to WFS Feature Map Configuration in the Admin Console.
Configuring WFS MetacardMapper
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Feature Type | featureType | String | Feature type. Format is {URI}local-name. | | Yes |
| Metacard Attribute to WFS Feature Property Mapping | metacardAttrToFeaturePropMap | String | Metacard attribute to WFS feature property mapping. Format is metacardAttribute=featureProperty. | | Yes |
| Temporal Sort By Feature Property | sortByTemporalFeatureProperty | String | When sorting temporally, sort by this feature property. | | No |
| Relevance Sort By Feature Property | sortByRelevanceFeatureProperty | String | When sorting by relevance, sort by this feature property. | | No |
| Distance Sort By Feature Property | sortByDistanceFeatureProperty | String | When sorting by distance, sort by this feature property. | | No |
Example Configuration
There are two ways to configure the MetacardMapper: one is to use the Configuration Admin available via the Web Admin Console; alternatively, a feature.xml file can be created and copied into the "deploy" directory. The following shows how to configure the MetacardMapper to be used with the sample data provided with GeoServer. This configuration shows a custom mapping for the feature type 'states'. For the given type, the feature property 'states.STATE_NAME' is mapped to the metacard attribute 'title'. In this particular case, since the state name is mapped to the title in the metacard, it will now be fully searchable. More mappings can be added to the featurePropToMetacardAttrMap line using a comma as a delimiter.
Below is an example of a MetacardMapper configuration within a feature.xml file:
<feature name="geoserver-states" version="2.8.2"
description="WFS Feature to Metacard mappings for GeoServer Example {http://www.openplans.org/topp}states">
<config name="org.codice.ddf.spatial.ogc.wfs.catalog.mapper.MetacardMapper-geoserver.http://www.openplans.org/topp.states">
featureType = {http://www.openplans.org/topp}states
service.factoryPid = org.codice.ddf.spatial.ogc.wfs.catalog.mapper.MetacardMapper
featurePropToMetacardAttrMap = states.STATE_NAME=title
</config>
</feature>
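The featurePropToMetacardAttrMap value above is a comma-delimited list of featureProperty=metacardAttribute pairs. As a hedged sketch (the actual MetacardMapper parsing may differ, and the STATE_ABBR mapping in the test is hypothetical), such a value could be split like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MappingValueSketch {
    // Parses a comma-delimited mapping string such as
    // "states.STATE_NAME=title,states.STATE_ABBR=description"
    // into featureProperty -> metacardAttribute pairs.
    static Map<String, String> parseMappings(String value) {
        Map<String, String> mappings = new LinkedHashMap<>();
        for (String entry : value.split(",")) {
            String[] parts = entry.trim().split("=", 2);
            if (parts.length == 2) {
                mappings.put(parts[0].trim(), parts[1].trim());
            }
        }
        return mappings;
    }
}
```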
Known Issues
None.
Overview
The DDF Standard Search UI application allows a user to search for records in the local Catalog (provider) and federated sources. Results of the search are returned in HTML format and are displayed on a globe, providing a visual representation of where the records were found.
This page describes how to integrate this application with external frameworks.
Extending DDF
Overview
This guide discusses the extension points and components provided by the Distributed Data Framework (DDF) Catalog API. Using code examples, diagrams, and references to specific instances of the Catalog API, it details how to develop and integrate various DDF components.
Building DDF
Prerequisites
-
Install J2SE 8 SDK (http://www.oracle.com/technetwork/java/javase/downloads/index.html).
-
Verify that the JAVA_HOME environment variable is set to the newly installed JDK location and that the PATH includes %JAVA_HOME%\bin (Windows) or $JAVA_HOME/bin (*nix).
-
Install Maven 3.1.0 or later (http://maven.apache.org/download.cgi). Verify that the PATH includes the MVN_HOME/bin directory.
-
In addition, access to a Maven repository with the latest project artifacts and dependencies is necessary for a successful build. The following sample settings.xml (the default settings file) can be used to access the public repositories with the required artifacts. For more help on how to use the settings.xml file, refer to the Maven settings reference page (http://maven.apache.org/settings.html).
<settings>
<!-- If proxy is needed
<proxies>
<proxy>
</proxy>
</proxies>
-->
</settings>
|
Handy Tip on Encrypting Passwords
See the Maven password encryption guide (http://maven.apache.org/guides/mini/guide-encryption.html) for how to encrypt the passwords in your settings.xml. |
Procedures
Run the Build
|
In order to run through a full build, be sure to have a clone of all repositories in the same folder: ddf (https://github.com/codice/ddf.git) |
-
Build command example for one individual repository.
# Build is run from the top level of the specified repository in a command line prompt or terminal.
cd ddf-support
mvn clean install
# At the end of the build, a BUILD SUCCESS will be displayed.
-
Build command example to build all repositories. This must be performed at the top level folder that contains all the repositories. A command list would look like this:
# Build is run from the top level folder that contains all the repositories in a command line prompt or terminal.
cd ddf-support
mvn clean install
cd ../ddf-parent
mvn clean install
cd ../ddf-platform
mvn clean install
cd ../ddf-admin
mvn clean install
cd ../ddf-catalog
mvn clean install
cd ../ddf-content
mvn clean install
cd ../ddf-spatial
mvn clean install
cd ../ddf-ui
mvn clean install
cd ../ddf
mvn clean install
# This will fully compile each individual app. From here you may hot deploy the necessary apps on top of the DDF Kernel.
|
To use the updated apps in a DDF distribution, update the versions referenced in the "ddf" repository. |
|
The zip distribution of DDF is contained in the DDF app in the distribution/ddf/target directory after the DDF app is built. |
-
Optionally, create a reactor pom that will allow you to perform the entire build process by calling a build on one pom rather than all of them. This pom must reside in the top-level folder that holds all the repositories. An example of the file would be:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.codice.ddf</groupId>
<artifactId>reactor</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>DDF Reactor</name>
<description>Distributed Data Framework (DDF) is an open source, modular integration framework</description>
<modules>
<module>ddf-support</module>
<module>ddf-parent</module>
<module>ddf-platform</module>
<module>ddf-admin</module>
<module>ddf-security</module>
<module>ddf-catalog</module>
<module>ddf-content</module>
<module>ddf-spatial</module>
<module>ddf-solr</module>
<module>ddf-ui</module>
<module>ddf</module>
</modules>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.4</version>
<configuration>
<!-- Do not deploy the reactor pom -->
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
</project>
|
It may take several moments for Maven to download the required dependencies in the first build. Build times may vary based on network speed and machine specifications. |
|
In certain circumstances, the build may fail due to a 'java.lang.OutOfMemory: Java heap space' error. This error is due to the large number of sub-modules in the DDF build, which causes the heap space to run out in the main Maven JVM. To fix this issue, set the MAVEN_OPTS environment variable to increase the maximum heap size available to Maven (for example, MAVEN_OPTS=-Xmx1024m). |
Troubleshoot Build Errors on ddf-admin and ddf-ui on a Windows Platform
Currently, the developers are using the following tools:
| Name | Version |
|---|---|
| bower | 1.3.2 |
| node.js | v0.10.26 |
| npm | 1.4.3 |
|
There have been intermittent build issues during the bower install. The error shown is an EPERM error related to either 'renaming' files or 'unlinking' files. This issue has been tracked multiple times on the bower GitHub page; the most recent tracked issue is https://github.com/bower/bower/issues/991 and will be closely monitored for a full resolution. Until a proper solution is found, the following options may solve the issue:
1. Re-run the build. Occasionally, the issue occurs on the first run and will resolve itself on the next.
2. Clean out the caches. There may be a memory issue, and a cache clean may help:
bower cache clean
npm cache clean
3. Reinstall bower. An occasional reinstall may solve the issue:
npm uninstall -g bower && npm install -g bower
4. Download and use Cygwin to perform the build. This simulates a run on a *nix system, which may not experience these issues.
These options are taken from suggestions provided on GitHub issue tickets. Several tickets have been created and closed, and several workarounds have been suggested; however, the issue still appears to exist. Once more information develops on the resolution of this issue, this page will be updated. |
DDF Development Prerequisites
|
Development requires full knowledge of the DDF Catalog. |
DDF is written in Java and requires a moderate amount of experience with the Java programming language, along with Java terminology, such as packages, methods, classes, and interfaces. DDF uses a small OSGi runtime to deploy components and applications. Before developing for DDF, it is necessary that developers have general knowledge of OSGi and the concepts used within it. This includes, but is not limited to, Catalog Commands and the following topics:
- The Service Registry
  - How services are registered
  - How to retrieve service references
- Bundles
  - Their role in OSGi
  - How they are developed
Documentation on OSGi can be viewed at the OSGi Alliance website (http://www.osgi.org). Helpful literature for beginners includes OSGi and Apache Felix 3.0 Beginner’s Guide by Walid Joseph Gédéon and OSGi in Action: Creating Modular Applications in Java by Richard Hall, Karl Pauls, Stuart McCulloch, and David Savage. For specific code examples from DDF, source code can be seen in the OSGi Services section.
Getting Set Up
To develop on DDF, access to the source code via GitHub is required.
Integrated Development Environments (IDE)
The DDF source code is not tied to any particular IDE. However, if a developer is interested in setting up the Eclipse IDE, they can view the Sonatype guide (http://books.sonatype.com/m2eclipse-book/reference/) on developing with Eclipse.
Additional Documentation
Additional documentation on developing with the core technologies used by DDF can be found on their respective websites.
Notably:
- Karaf (http://karaf.apache.org/)
- CXF (http://cxf.apache.org/docs/overview.html)
- Geotools (http://docs.geotools.org/latest/developer/)
Major Directories
During DDF installation, the major directories shown in the table below are created, modified, or replaced in the destination directory.
| Directory Name | Description |
|---|---|
| bin | Scripts to start and stop DDF |
| data | The working directory of the system – installed bundles and their data |
| data/log/ddf.log | Log file for DDF, logging all errors, warnings, and (optionally) debug statements. This log rolls up to 10 times; roll frequency is based on a configurable maximum file size (default = 1 MB) |
| deploy | Hot-deploy directory – KARs and bundles added to this directory will be hot-deployed (empty upon DDF installation) |
| docs | The DDF Catalog API Javadoc |
| etc | Directory monitored for addition/modification/deletion of third-party .cfg configuration files |
| etc/ddf | Directory monitored for addition/modification/deletion of DDF-related .cfg configuration files (e.g., the Schematron configuration file) |
| etc/templates | Template .cfg files for use in configuring DDF sources, settings, etc., by copying to the etc/ddf directory |
| lib | The system's bootstrap libraries. Includes the ddf-branding.jar file, which is used to brand the system console with the DDF logo |
| licenses | Licensing information related to the system |
| system | Local bundle repository. Contains all of the JARs required by DDF, including third-party JARs |
Formatting Source Code
A code formatter for the Eclipse IDE that can be used across all DDF projects will allow developers to format code similarly and minimize merge issues in the future.
DDF uses an updated version of the Apache ServiceMix Code Formatter (http://servicemix.apache.org/developers/building.html) for code formatting.
DOWNLOAD THIS FILE: ddf-eclipse-code-formatter.xml
- Follow the link.
- Right-click on Raw.
- Select Save As.
Load the Code Formatter Into the Eclipse IDE
- In Eclipse, select Window → Preferences. The Preferences window opens.
- Select Java → Code Style → Formatter.
- Select the Edit… button and load the downloaded ddf-eclipse-code-formatter.xml file.
- Select the OK button.
Load the Code Formatter Into IntelliJ IDEA
IntelliJ IDEA 13 is capable of importing Eclipse’s Code Formatter directly from within IntelliJ without the use of any plugins.
- Open IntelliJ IDEA.
- Select File → Settings → Code Style → Java.
- Select Manage.
- Select the Import button to import the ddf-eclipse-code-formatter.xml file.
Format Your Source Code Using Eclipse
A developer may write code and format it before saving.
- Before the file is saved, highlight all of the source code in the IDE editor window.
- Right-click on the highlighted code.
- Select Source → Format. The code formatter is applied to the source code, and the file can be saved.
Set Up Save Actions in Eclipse
A developer can also set up Save Actions to format the source code automatically.
- Open Eclipse.
- Select Window → Preferences (Eclipse → Preferences on Mac). The Preferences window opens.
- Select Java → Editor → Save Actions.
- Select Perform the selected actions on save.
- Select Format source code.
- Select Format all lines or Format edited lines, as necessary.
- Optionally, select Organize imports (recommended).
- Select the Apply button.
- Select the OK button.
Format Source Code Using IntelliJ
In the toolbar, select Code → Reformat Code or use the keyboard shortcut Ctrl-Alt-L.
Ensuring Compatibility
Compatibility Goals
The DDF framework, like all software, will mature over time. Changes will be made to improve efficiency, add features, and fix bugs. To ensure that components built for DDF and its sub-frameworks are compatible, developers must use caution when establishing dependencies from developed components.
Guidelines for Maintaining Compatibility
DDF Framework
For components written at the DDF Framework level (see Developing at the Framework Level), adhere to the following specifications:
| Standard/Specification | Version | Current Implementation (subject to change) |
|---|---|---|
| OSGi Framework | 4.2 | Apache Karaf 2.x |
| OSGi Enterprise Specification | 4.2 | Apache Aries (Blueprint) |
|
Avoid developing dependencies on the implementations directly, as compatibility in future releases is not guaranteed. |
DDF Catalog API
For components written for the DDF Catalog (see Developing Catalog Components), only dependencies on the current major version of the Catalog API should be used. Detailed documentation of the Catalog API can be found in the Catalog API Javadocs.
| Dependency | Version Interval | Notes |
|---|---|---|
| DDF Catalog API | [2.0, 3.0) | Major version will be incremented (to 3.0) if/when compatibility is broken with the 2.x API. |
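The half-open interval notation used for the Catalog API dependency follows OSGi version-range semantics: the lower bound is inclusive and the upper bound is exclusive. As a plain-Java illustration of which versions [2.0, 3.0) accepts (the helper class below is hypothetical and not part of the DDF API):

```java
// Hypothetical illustration of the half-open OSGi version interval [2.0, 3.0):
// a dependency declared with this range accepts any 2.x version of the Catalog
// API but rejects 3.0 and above, where compatibility may be broken.
public final class CatalogApiRange {

    /** Returns true if a "major.minor.micro" version string lies in [2.0, 3.0). */
    public static boolean accepts(String version) {
        int major = Integer.parseInt(version.split("\\.")[0]);
        // Inclusive lower bound 2.0, exclusive upper bound 3.0: any version
        // whose major component is exactly 2 is acceptable.
        return major == 2;
    }
}
```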
DDF Software Versioning
DDF follows the Semantic Versioning White Paper for bundle versioning (see Software Versioning).
Third Party and Utility Bundles
It is recommended to avoid building directly on included third-party and utility bundles. These components do provide utility (e.g., JScience) and reuse potential; however, they may be upgraded or even replaced at any time as bug fixes and new capabilities dictate. For example, Web services may be built using CXF. However, the distributions frequently upgrade CXF between releases to take advantage of new features. If building on these components, be aware of the version upgrades with each distribution release.
Instead, component developers should package and deliver their own dependencies to ensure future compatibility. For example, if re-using a bundle like commons-geospatial, the specific bundle version that you are depending on should be included in your packaged release, and the proper versions should be referenced in your bundle(s).
Best Practices
- Always use a version number when exporting a package. In the following example, docs represents the project and artifactId of the package being exported, and 2.8.2 represents the version of the project.

<Export-Package>
    docs*;version=2.8.2
</Export-Package>
- Try to avoid deploying multiple versions of a bundle. Although OSGi is designed to support multiple versions, other developers may not include the versions of the packages that are being imported. If the bundle is versioned and designed appropriately, typically, having multiple versions of the bundle will not be an issue. However, if each bundle is competing for a specific resource, race conditions may occur. Third party and utility bundles (often denoted by commons in the bundle name) are the general exception to this rule, as these bundles will likely function as expected with multiple versions deployed.
Development Recommendations
Javascript
Avoid using console.log
Package Names
Use singular package names.
Author Tags
Author tags are discouraged from being placed in the source code, as they can be a barrier to collaboration and have potential legal ramifications.
Unit Testing
All code should contain unit tests that are able to test out any localized functionality within that class. When working with OSGi, code may have references to various services and other areas that are not available at compile-time. One way to work around the issue of these external dependencies is to use a mocking framework.
|
Recommended Framework
The recommended framework to use with DDF is Mockito: https://github.com/mockito/mockito. This test-level dependency is managed by the ddf-parent pom and is used to standardize the version being used across DDF. |
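The note above names Mockito as the recommended framework. The underlying idea, substituting a test double for an OSGi service that is unavailable at test time so localized logic can still be exercised, can be shown without any framework at all. The interface and classes below are invented for illustration; in real DDF tests, Mockito's mock() and when() would replace the hand-written stub.

```java
// Hypothetical sketch: unit-testing a class that depends on an OSGi service
// without a running container, by substituting a hand-written test double.
// DDF's recommended approach is Mockito; this plain-Java stub only
// illustrates the underlying idea. All names here are invented.

/** A service interface as it might be consumed from the OSGi registry. */
interface QueryService {
    int resultCount(String query);
}

/** The class under test; it only knows the interface, not the container. */
class SearchSummary {
    private final QueryService service;

    SearchSummary(QueryService service) {
        this.service = service;
    }

    String describe(String query) {
        int count = service.resultCount(query);
        return count == 0 ? "no results" : count + " result(s)";
    }
}

public class SearchSummaryTest {
    public static String run(String query, int stubbedCount) {
        // The stub stands in for the real service, which is not available
        // outside the OSGi container at test time.
        QueryService stub = q -> stubbedCount;
        return new SearchSummary(stub).describe(query);
    }
}
```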
Logging
There are many logging frameworks available for Java.
|
Recommended Framework
To maintain the best compatibility, the recommended logging framework is Simple Logging Facade for Java (SLF4J) (http://www.slf4j.org/), specifically the slf4j-api. SLF4J allows a very robust logging API while letting the backend implementation be switched out seamlessly. Additionally, it is compatible with pax logging and natively implemented by logback. |
DDF code uses the first five SLF4J log levels:
- trace (the least serious)
- debug
- info
- warn
- error (the most serious)
Examples:
//Check if trace is enabled before executing expensive XML processing
if (LOGGER.isTraceEnabled()) {
    LOGGER.trace("XML returned: {}", XMLUtils.toString(xml));
}

//It is not necessary to wrap with LOGGER.isTraceEnabled() here, since slf4j will not
//construct the message String unless trace level is enabled
LOGGER.trace("Executing search: {}", search);
Dependency Injection Frameworks
It is highly recommended to use a dependency injection framework, such as Blueprint, Spring-DM, or iPojo for non-advanced OSGi tasks. Dependency injection frameworks allow for more modularity in code, keep the code’s business logic clean of OSGi implementation details, and take the complexity out of the dynamic nature of OSGi. In OSGi, services can be added and removed at any time, and dependency injection frameworks are better suited to handle these types of situations. Allowing the code to be clean of OSGi packages also makes code easier to reuse outside of OSGi. These frameworks provide code conveniences of service registration, service tracking, configuration property management, and other OSGi core principles.
Basic Security
(Provided by Pierre Parrend (http://www.slideshare.net/kaihackbarth/security-in-osgi-applications-robust-osgi-platforms-secure-bundles))
- Bundles should:
  - Never use synchronized statements that rely on third-party code. Keep multi-threaded code in mind when using synchronized statements in general, as they can lead to performance issues.
  - Only have dependencies on bundles that are trusted.
- Shared code should:
  - Provide only final, static, non-mutable fields.
  - Make security manager calls at the beginning of methods in all required places:
    - All constructors
    - The clone() method, if a class implements Cloneable
    - readObject(ObjectInputStream), if the class implements Serializable
  - Have security checks in final methods only.
- Shared objects (OSGi services) should:
  - Only have basic types and serializable final types as parameters.
  - Perform copy and validation (e.g., null checks) of parameters prior to using them.
  - Not use Exception objects that carry any configuration information.
OSGi Services
DDF uses dependency injection to retrieve and register services to the OSGi registry. There are many dependency injection frameworks that can perform these operations; Blueprint and Spring DM are both used by DDF. Many tutorials and guides for both frameworks are available on the Internet. Links to some of them are provided in the Additional Resources section below.
Spring DM - Retrieving a Service Instance
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:osgi="http://www.springframework.org/schema/osgi">

    <osgi:reference id="ddfCatalogFramework" interface="ddf.catalog.CatalogFramework" />

    <bean class="my.sample.NiftyEndpoint">
        <constructor-arg ref="ddfCatalogFramework" />
    </bean>

</beans>

| Element | Action |
|---|---|
| osgi:reference | Retrieves a service from the registry |
| bean | Instantiates a new object, injecting the retrieved service as a constructor argument |
Blueprint - Retrieving a Service Instance
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="ddfCatalogFramework" interface="ddf.catalog.CatalogFramework" />

    <bean class="my.sample.NiftyEndpoint">
        <argument ref="ddfCatalogFramework" />
    </bean>

</blueprint>

| Element | Action |
|---|---|
| reference | Retrieves a service from the registry |
| bean | Instantiates a new object, injecting the retrieved service as a constructor argument |
Blueprint - Registering a Service into the Registry
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="transformer" class="my.sample.NiftyTransformer"/>

    <service ref="transformer" interface="ddf.catalog.transform.QueryResponseTransformer" />

</blueprint>

| Element | Action |
|---|---|
| bean | Instantiates a new object |
| service | Registers the bean instance as a service that implements the ddf.catalog.transform.QueryResponseTransformer interface |
Packaging Capabilities as Bundles
Services and code are physically deployed to DDF using bundles. The bundles within DDF are created using the Maven Bundle plugin. Bundles are Java JAR files that have additional metadata in the MANIFEST.MF that is relevant to an OSGi container.
Alternative Bundle Creation Methods
|
Using Maven is not necessary to create bundles. Alternative tools exist, and OSGi manifest files can also be created by hand, although hand editing should be avoided by most developers. |
See external links (below) for resources that give in-depth guides on creating bundles.
Creating a Bundle
Bundle Development Recommendations
- Avoid creating bundles by hand or editing a manifest file. Many tools exist for creating bundles, notably the Maven Bundle plugin, which handles the details of OSGi configuration and automates the bundling process, including generation of the manifest file.
- Always make a distinction between imported packages that are optional and those that are required. Requiring every package when not necessary can cause an unnecessary dependency ripple effect among bundles.
Maven Bundle Plugin
Below is a code snippet from a Maven pom.xml for creating an OSGi Bundle using the Maven Bundle plugin.
...
<packaging>bundle</packaging>
...
<build>
  ...
  <plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <configuration>
      <instructions>
        <Bundle-Name>DDF DOCS</Bundle-Name>
        <Export-Package />
        <Bundle-SymbolicName>ddf.docs</Bundle-SymbolicName>
        <Import-Package>
          ddf.catalog,
          ddf.catalog.*
        </Import-Package>
      </instructions>
    </configuration>
  </plugin>
  ...
</build>
...
Deploying a Bundle
A bundle is typically installed in one of two ways:
- As a feature
- Hot-deployed in the /deploy directory
The fastest way to deploy a created bundle during development is to copy it to the /deploy directory of a running DDF. This directory is monitored for new bundles, which are deployed immediately. According to the Karaf documentation, "Karaf supports hot deployment of OSGi bundles by monitoring JAR files inside the [home]/deploy directory. Each time a JAR is copied in this folder, it will be installed inside the runtime. It can be updated or deleted and changes will be handled automatically. In addition, Karaf also supports exploded bundles and custom deployers (Blueprint and Spring DM are included by default)." Once deployed, the bundle should come up in the Active state if all of its dependencies were properly met. When this occurs, the service is available to be used.
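Conceptually, hot deployment boils down to monitoring a directory and installing JARs that appear in it. The sketch below is a simplified, hypothetical illustration of that monitoring loop, not Karaf's actual deployer (which also handles updates, deletions, exploded bundles, and custom deployers):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Conceptual sketch of a hot-deploy monitor: each scan reports JAR files that
// have appeared in the deploy directory since the previous scan. In a real
// deployer, each newly reported JAR would then be installed into the runtime.
public class DeployScanner {
    private final Set<String> seen = new HashSet<>();

    /** Returns the names of JARs that are new since the last scan. */
    public List<String> scan(File deployDir) {
        List<String> fresh = new ArrayList<>();
        File[] jars = deployDir.listFiles((dir, name) -> name.endsWith(".jar"));
        if (jars != null) {
            for (File jar : jars) {
                if (seen.add(jar.getName())) { // add() returns false if already known
                    fresh.add(jar.getName());
                }
            }
        }
        return fresh;
    }
}
```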
Verifying Bundle State
To verify if a bundle is deployed and running, go to the running command console and view the status.
- Execute the list command.
- If the name of the bundle is known, the list command can be piped to the grep command to quickly find the bundle.

The example below shows how to verify that the CAB Client is deployed and running.

ddf@local>list | grep -i cab
[ 162] [Active ] [ ] [ ] [ 80] DDF :: Registry :: CAB Client (2.0.0)
The state is Active, indicating that the bundle is ready for program execution.
Additional Resources
- Blueprint
- Spring DM
- Lessons Learned from it-agile (PDF): http://www.martinlippert.org/events/OOP2010-OSGiLessonsLearned.pdf
- Creating Bundles
- Bundle States
UI Development Recommendations
Recommendations for developing UI components.
"White Box" DDF Architecture
Architecture Diagram
As depicted in the architectural diagram above, DDF runs on top of an OSGi framework, a Java virtual machine (JVM), several choices of operating systems, and the physical hardware infrastructure. The items within the dotted line represent DDF out of the box.
DDF is a customized and branded distribution of Apache Karaf. DDF could also be considered to be a more lightweight OSGi distribution, as compared to Apache ServiceMix, FUSE ESB, or Talend ESB, all of which are also built upon Apache Karaf. Similar to its peers, DDF incorporates additional upstream dependencies (https://tools.codice.org/#DDFArchitecture-AdditionalUpstreamDependencies).
DDF as a framework hosts DDF applications, which are extensible by adding components via OSGi. The best example of this is the DDF Catalog (API), which offers extensibility via several types of Catalog Components. The DDF Catalog API serves as the foundation for several applications and resides in the applications tier.
The Catalog Components consist of Endpoints, Plugins, Catalog Frameworks, Sources, and Catalog Providers. Customized components can be added to DDF.
Nomenclature
- Capability: A general term used to refer to an ability of the system.
- Application: One or more features that together form a cohesive collection of capabilities.
- Component: Represents a portion of an application that can be extended.
- Bundle: A Java Archive (JAR) with special OSGi manifest entries.
- Feature: One or more bundles that form an installable unit; defined by Apache Karaf but portable to other OSGi containers.
OSGi Core
DDF makes use of OSGi v4.2 to provide several capabilities:
- Has a microkernel-based foundation, which is lightweight due to its origin in embedded systems.
- Enables integrators to easily customize components to run on their system.
- Deploys software applications as OSGi components, or bundles. Bundles are modules that can be deployed into the OSGi container (the Eclipse Equinox OSGi Framework by default).
- Bundles provide flexibility, allowing integrators to choose the bundles that meet their mission needs.
- Bundles provide reusable modules that can be dropped into any container.
- Provides modularity, module-based security, and low-level services, such as Hypertext Transfer Protocol (HTTP), logging, events (basic publish/subscribe), and dependency injection.
- Implements a dynamic component model that allows application updates without downtime. Components can be added or updated in a running system.
- Provides standardized application configuration (ConfigurationAdmin and MetaType).
OSGi is no longer treated as an acronym, but the name originally derived from the Open Services Gateway initiative.
More information on OSGi is available at http://www.osgi.org/.
Built on Apache Karaf
Apache Karaf is a FOSS product that includes an OSGi framework and adds extra functionality, including:
- Web Administration Console: Useful for configuring bundles, installing/uninstalling features, and viewing services.
- System Console: Provides command-line administration of the OSGi container. All functionality in the Web Administration Console can also be performed via this command-line console.
- Logging: Provides centralized logging to a single log file (data/log/ddf.log) utilizing log4j.
- Provisioning: Of libraries or applications.
- Security: Provides a security framework based on the Java Authentication and Authorization Service (JAAS).
- Deployer: Provides hot deployment of new bundles dropped into the <INSTALL_DIR>/deploy directory.
- Blueprint: Provides an implementation of the OSGi Blueprint Container specification, which defines a dependency injection framework for dealing with dynamic configuration of OSGi services. DDF uses the Apache Aries implementation of Blueprint. More information can be found at http://aries.apache.org/modules/blueprint.html.
- Spring DM: An alternative dependency injection framework. DDF is not dependent on a specific dependency injection framework; Blueprint is recommended.
Additional Upstream Dependencies
DDF is a customized distribution of Apache Karaf, and therefore includes all the capabilities of Apache Karaf. DDF also includes additional FOSS components to provide a richer set of capabilities. Integrated components include their own dependencies, but at the platform level, DDF includes the following upstream dependencies:
- Apache CXF: An open source services framework. CXF helps build and develop services using front-end programming APIs, such as JAX-WS and JAX-RS. More information can be found at http://cxf.apache.org.
- Apache Commons: Provides a set of reusable Java components that extend functionality beyond that provided by the standard JDK. More information can be found at http://commons.apache.org.
- OSGeo GeoTools: Provides a spatial object model and fundamental geometric functions, which are used by DDF spatial criteria searches. More information can be found at http://geotools.org/.
- Joda Time: Provides an enhanced, easier-to-use version of the Java date and time classes. More information can be found at http://joda-time.sourceforge.net.
For a full list of dependencies, refer to the Software Version Description Document (SVDD).
Recommended Hardware
Because of its modular nature, DDF may require few or many system resources, depending on which bundles and features are deployed. In general, DDF will take advantage of available memory and processors. A 64-bit JVM is required, and a typical installation is performed on a single machine with 16 GB of memory and eight processor cores.
Web Service Security Architecture
The Web Service Security (WSS) functionality that comes with DDF is integrated throughout the system. This is a central resource describing how all of the pieces work together and where they are located within the system.
DDF comes with a Security Framework and Security Services. The Security Framework is the set of APIs that define the integration with the DDF framework and the Security Services are the reference implementations of those APIs built for a realistic end-to-end use case.
Security Framework
The DDF Security Framework utilizes Apache Shiro as the underlying security framework. The classes mentioned in this section will have their full package name listed, to make it easy to tell which classes come with the core Shiro framework and which are added by DDF.
Subject
ddf.security.Subject <extends> org.apache.shiro.subject.Subject
The Subject is the key object in the security framework. Most of the workflow and implementations revolve around creating and using a Subject. The Subject object in DDF is a class that encapsulates all information about the user performing the current operation. The Subject can also be used to perform permission checks to see if the calling user has acceptable permission to perform a certain action (e.g., calling a service or returning a metacard). This class was made DDF-specific because the Shiro interface cannot be added to the Query Request property map.
Implementations of Subject:
| Classname | Description |
|---|---|
| ddf.security.impl.SubjectImpl | Extends org.apache.shiro.subject.support.DelegatingSubject |
Security Manager
ddf.security.service.SecurityManager
The Security Manager is a service that handles the creation of Subject objects. A proxy to this service should be obtained by an endpoint to create a Subject and add it to the outgoing QueryRequest. The Shiro framework relies on creating the subject by obtaining it from the current thread. Due to the multi-threaded and stateless nature of the DDF framework, utilizing the Security Manager interface makes retrieving Subjects easier and safer.
Implementations of Security Managers:
| Classname | Description |
|---|---|
| ddf.security.service.SecurityManagerImpl | This implementation of the Security Manager handles taking in both org.apache.shiro.authc.AuthenticationToken and org.apache.shiro.subject.PrincipalCollection objects when creating a Subject |
Authentication Tokens
org.apache.shiro.authc.AuthenticationToken
Authentication Tokens are used to verify authentication of a user when creating a subject. A common use-case is when a user is logging directly in to the DDF framework.
| Classname | Description |
|---|---|
| ddf.security.service.impl.cas.CasAuthenticationToken | This Authentication Token is used for authenticating a user who has logged in with CAS. It takes in a proxy ticket that can be validated on the CAS server. |
Realms
Authenticating Realms
org.apache.shiro.realm.AuthenticatingRealm
Authenticating Realms are used to authenticate an incoming authentication token and create a Subject on successful authentication.
Implementations of Authenticating Realms that come with DDF:
| Classname | Description |
|---|---|
| ddf.security.realm.sts.StsRealm | This realm delegates authentication to the Secure Token Service (STS). It creates a RequestSecurityToken message from the incoming Authentication Token and converts a successful STS response into a Subject. |
Authorizing Realms
org.apache.shiro.realm.AuthorizingRealm
Authorizing Realms are used to perform authorization on the current Subject. They are used when performing both service authorization and filtering, and are passed the AuthorizationInfo of the Subject along with the Permissions of the object being accessed. The response from these realms is true (if the Subject has permission to access the object) or false (if it does not).
Other implementations of the Security API that come with DDF:
| Classname | Description |
|---|---|
| org.codice.ddf.platform.filter.delegate.DelegateServletFilter | The DelegateServletFilter detects any servlet filters that have been exposed as OSGi services and places them, in order, in front of any servlet or web application running on the container. |
| org.codice.ddf.security.filter.websso.WebSSOFilter | This filter serves as the main security filter that works in conjunction with a number of handlers to protect a variety of contexts, each using different authentication schemes and policies. |
| org.codice.ddf.security.handler.saml.SAMLAssertionHandler | This handler is executed by the WebSSOFilter for any contexts configured to use it. This handler should always come first when configured in the Web Context Policy Manager, as it provides a caching capability to web contexts that use it. The handler will first check for the existence of a cookie named "org.codice.websso.saml.token" to extract a Base64 + deflate SAML assertion from the request. If an assertion is found, it will be converted to a SecurityToken. Failing that, the handler will check for a JSESSIONID cookie that might relate to a current SSO session with the container. If the JSESSIONID is valid, the SecurityToken will be retrieved from the cache in the LoginFilter. |
| org.codice.ddf.security.handler.basic.BasicAuthenticationHandler | Checks for basic authentication credentials in the HTTP request header. If they exist, they are retrieved and passed to the LoginFilter for exchange. |
| org.codice.ddf.security.handler.pki.PKIHandler | Handler for PKI-based authentication. The X.509 chain will be extracted from the HTTP request and converted to a BinarySecurityToken. |
| org.codice.ddf.security.handler.anonymous.AnonymousHandler | Handler that allows anonymous user access via a guest user account. The guest account credentials are configured via the org.codice.ddf.security.claims.anonymous.AnonymousClaimsHandler. The AnonymousHandler also checks for the existence of basic auth credentials or PKI credentials that might override the use of the anonymous user. |
| org.codice.ddf.security.filter.login.LoginFilter | This filter runs immediately after the WebSSOFilter and exchanges any authentication information found in the request for a Subject via Shiro. |
| org.codice.ddf.security.filter.authorization.AuthorizationFilter | This filter runs immediately after the LoginFilter and checks any permissions assigned to the web context against the attributes of the user via Shiro. |
| org.apache.shiro.realm.AuthenticatingRealm | This is an abstract authenticating realm that exchanges an org.apache.shiro.authc.AuthenticationToken for a ddf.security.Subject in the form of an org.apache.shiro.authc.AuthenticationInfo. |
| ddf.security.realm.sts.StsRealm | This realm is an implementation of org.apache.shiro.realm.AuthenticatingRealm and connects to a (configurable) STS to exchange the authentication token for a Subject. |
| ddf.security.service.AbstractAuthorizingRealm | This is an abstract authorizing realm that takes care of caching and parsing the Subject's AuthorizingInfo; it should be extended to allow the implementing realm to focus on making the authorization decision. |
| ddf.security.pdp.realm.XACMLRealm | This realm delegates the authorization decision to a XACML-based Policy Decision Point (PDP) backend. It creates a XACML 3.0 request and looks on the OSGi framework for any service implementing ddf.security.pdp.api.PolicyDecisionPoint. |
| ddf.security.pdp.realm.SimpleAuthZRealm | This realm performs the authorization decision without delegating to an external service. It uses the incoming permissions to create a decision. However, it is possible to extend this realm using the ddf.security.policy.extension.PolicyExtension interface, which allows an integrator to add policy information to the PDP that cannot be covered via its generic matching policies. This approach is often easier to configure for those who are not familiar with XACML. Note that no PolicyExtension implementations are provided out of the box. |
| org.codice.ddf.security.validator.* | A number of STS validators are provided for X.509 (BinarySecurityToken), UsernameToken, SAML Assertion, and DDF custom tokens. The DDF custom tokens are all BinarySecurityTokens that may carry PKI or username/password information, as well as an authentication realm (correlating to JAAS realms installed in the container). The authentication realm allows an administrator to restrict which services are used to authenticate users. For example, installing the security-sts-ldaplogin feature will enable a JAAS realm with the name "ldap". This realm can then be specified on any context using the Web Context Policy Manager; the realm selection is then passed via the token sent to the STS to determine which validator to use. |
|
An update was made to the SAML Assertion Handler to pass SAML assertions via headers instead of cookies. Cookies are still accepted and processed to maintain legacy federation compatibility, but only headers are used when federating out. This means that it is still possible to federate and pass a machine’s identity, but federation of a user’s identity will ONLY work when federating from 2.7.x to 2.8.x+ or between 2.8.x+ and 2.8.x+. |
Securing REST
The delegating servlet filter is the topmost filter for all web contexts. It loads all of the security filters. The first filter invoked is the Web SSO filter, which reads from the Web Context Policy Manager and functions as the first decision point. If the request is from a whitelisted context, no further authentication is needed, and the request goes directly to the desired endpoint. If the context is not on the whitelist, the filter attempts to get a handler for the context, looping through all configured context handlers until one signals that it has found authentication information that it can use to build a token. This configuration can be changed by modifying the Web Context Policy Manager configuration. If no handler can be resolved for the context, the filter returns an authentication error and the process stops. If a handler is successfully found, an authentication token is assigned and the request continues to the Login Filter. The Login Filter receives the token and returns a Subject. To retrieve the Subject, the token is sent through Shiro to the STS realm, where it is exchanged for a SAML assertion through a SOAP call to an STS server. If a Subject is returned, the request moves to the Authorization Filter, which checks the permissions of the user. If the user has the correct permissions to access that web context, the request is allowed to reach the endpoint.
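The decision flow just described (whitelist check, then handler loop, then authentication error) can be sketched as ordinary Java. All types and return values here are invented for illustration; the real WebSSOFilter operates on servlet requests and OSGi handler services:

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.function.Function;

// Conceptual sketch of the Web SSO decision point: whitelisted contexts pass
// straight through; otherwise each configured handler is asked in order for
// authentication information, and the request is rejected when none can
// produce a token. All names here are invented for illustration only.
public class WebSsoDecision {

    public static String decide(String context,
                                Set<String> whitelist,
                                List<Function<String, Optional<String>>> handlers) {
        if (whitelist.contains(context)) {
            return "PASS";                     // no further authentication needed
        }
        for (Function<String, Optional<String>> handler : handlers) {
            Optional<String> token = handler.apply(context);
            if (token.isPresent()) {
                return "TOKEN:" + token.get(); // continue to the login filter
            }
        }
        return "AUTH_ERROR";                   // no handler could build a token
    }
}
```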
Encryption Service
The encryption service and encryption command, which are based on Jasypt, provide an easy way for developers to add encryption capabilities to DDF.
Encryption Command
An encrypt security command is provided with DDF that allows plain text to be encrypted. This is useful when displaying password fields in a GUI.
Below is an example of the security:encrypt command used to encrypt the plain text "myPasswordToEncrypt". The output, bR9mJpDVo8bTRwqGwIFxHJ5yFJzatKwjXjIo/8USWm8=, is the encrypted value.
ddf@local>security:encrypt myPasswordToEncrypt
bR9mJpDVo8bTRwqGwIFxHJ5yFJzatKwjXjIo/8USWm8=
Filtering
Metacard filtering is performed in a Post Query plugin that occurs after a query has been performed.
How Filtering Works
Each metacard result will contain security attributes that are pulled from the metadata record after being processed by a PostQueryPlugin (Not provided! You must create your own plugin for your specific metadata!) that populates this attribute. The security attribute is a HashMap containing a set of keys that map to lists of values. The metacard is then processed by a filter plugin that creates a KeyValueCollectionPermission from the metacard’s security attribute. This permission is then checked against the user subject to determine if the subject has the correct claims to view that metacard. The decision to filter the metacard eventually relies on the installed PDP (features:install security-pdp-java OR features:install security-pdp-xacml). The PDP that is being used returns a decision, and the metacard will either be filtered or allowed to pass through.
The security attributes populated on the metacard are completely dependent on the type of the metacard. Each type of metacard must have its own PostQueryPlugin that reads the metadata being returned and populates the metacard’s security attribute. If the subject permissions are missing during filtering, all resources will be filtered.
Example (represented as simple XML for ease of understanding):
<metacard>
<security>
<map>
<entry key="entry1" value="A,B" />
<entry key="entry2" value="X,Y" />
<entry key="entry3" value="USA,GBR" />
<entry key="entry4" value="USA,AUS" />
</map>
</security>
</metacard>
<user>
<claim name="claim1">
<value>A</value>
<value>B</value>
</claim>
<claim name="claim2">
<value>X</value>
<value>Y</value>
</claim>
<claim name="claim3">
<value>USA</value>
</claim>
<claim name="claim4">
<value>USA</value>
</claim>
</user>
In the above example, the user’s claims are represented very simply and are similar to how they would actually appear in a SAML 2 assertion. Each of these user (or subject) claims will be converted to a KeyValuePermission object. These permission objects will be implied against the permission object generated from the metacard record. In this particular case, the metacard might be allowed if the policy is configured appropriately because all of the permissions line up correctly.
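The "implied against" check can be illustrated with a small sketch. This is a hypothetical helper, not DDF's actual KeyValuePermission implementation: a subject's claim values imply a metacard entry when they cover all of the entry's values ("Match All") or at least one of them ("Match One"):

```java
import java.util.Set;

// Simplified illustration of permission implication between subject claim
// values and a metacard security entry's values. Hypothetical, not DDF API.
public class PermissionCheck {

    // "Match All": the claim values must cover every entry value.
    static boolean matchAll(Set<String> claimValues, Set<String> entryValues) {
        return claimValues.containsAll(entryValues);
    }

    // "Match One": at least one entry value must appear among the claim values.
    static boolean matchOne(Set<String> claimValues, Set<String> entryValues) {
        return entryValues.stream().anyMatch(claimValues::contains);
    }
}
```

Using the example data above, claim1={A,B} satisfies a Match All mapping against entry1={A,B}, while claim3={USA} satisfies a Match One mapping against entry3={USA,GBR} but would fail a Match All mapping.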
Filter Policies
The procedure for setting up a policy differs depending on which PDP implementation is installed. The security-pdp-java implementation is the simplest PDP to use, so it is covered here.
-
Open the configuration page at https://localhost:8993/system/console/configuration.
-
Click on the Authz Security Settings configuration.
-
Add any roles that are allowed to access protected services.
-
Add any SOAP actions that are not to be protected by the PDP.
-
Add any attribute mappings necessary to map between subject claims and metacard values.
-
For example, the above example would require two Match All mappings of claim1=entry1 and claim2=entry2
-
Match One mappings would contain claim3=entry3 and claim4=entry4.
-
|
See the Security PDP AuthZ Realm (Java PDP) section of this documentation for a description of the configuration page. |
With the security-pdp-java feature configured in this way, the above Metacard would be displayed to the user.
The XACML PDP is explained in more detail in the XACML Policy Decision Point (PDP) section of this documentation. It is the administrator's responsibility to write a XACML policy capable of returning the correct response message. The Java-based PDP should perform adequately in most situations. It is possible to install the security-pdp-java and security-pdp-xacml features at the same time; the system could be configured this way to allow the Java PDP to handle most cases and reserve XACML policies for situations more complex than what the Java PDP is designed for. Keep in mind that running both PDPs is a very complex configuration and should only be attempted if the details are well understood.
Filter a New Type of Metacard
To enable filtering on a new type of record, implement a PostQueryPlugin that is able to read the string metadata contained within the metacard record. The plugin must set the security attribute to a map of list of values extracted from the metacard. Note that in DDF, there is no default plugin that populates the security attribute on the metacard. A plugin must be created to populate these fields in order for filtering to work correctly.
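As a rough illustration of the plugin pattern described above, the sketch below uses simplified stand-ins for DDF's Metacard and PostQueryPlugin types (the real interfaces differ), parsing a hypothetical `<entry key="..." value="A,B"/>` metadata style into the security attribute map:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of a plugin that populates a metacard's security attribute from its
// string metadata. Types here are simplified stand-ins, not DDF's actual API.
public class SecurityAttributePlugin {

    static class Metacard {
        String metadata;
        Map<String, List<String>> securityAttribute = new HashMap<>();
    }

    // Extract key="..." value="..." pairs from the metadata string and store
    // each as a key mapped to its list of comma-separated values.
    static void process(Metacard metacard) {
        Pattern p = Pattern.compile("key=\"([^\"]+)\"\\s+value=\"([^\"]+)\"");
        Matcher m = p.matcher(metacard.metadata);
        while (m.find()) {
            metacard.securityAttribute.put(m.group(1), List.of(m.group(2).split(",")));
        }
    }
}
```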
Security Token Service
The Security Token Service (STS) is a service running in DDF that allows clients to request SAML v2.0 assertions. These assertions are then used to authenticate a client allowing them to issue other requests, such as ingests or queries to DDF services.
The STS is an extension of Apache CXF-STS. It is a SOAP web service that utilizes WS-Security policies. The generated SAML assertions contain attributes about a user and are used by the Policy Enforcement Point (PEP) in the secure endpoints. Specific configuration details on the bundles that come with DDF can be found on the Security STS application page. This page details all of the STS components that come out of the box with DDF, along with configuration options, installation help, and which services they import and export.
The STS server contains validators, claim handlers, and token issuers to process incoming requests. When a request is received, the validators first ensure that it is valid, verifying authentication against configured services such as LDAP, DIAS, or PKI. If the request is found to be invalid, the process ends and an error is returned. Next, the claims handlers determine how to handle the request, adding user attributes or properties as configured. The token issuer then creates a SAML 2.0 assertion and associates it with the subject. Finally, the STS server sends the assertion back to the requestor; this flow is used in both SOAP and REST cases.
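The validate-then-claims-then-issue order described above can be sketched as follows. All types here are hypothetical stand-ins for the CXF STS components, and the "assertion" is stubbed as a plain string:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative STS request pipeline: validate, gather claims, issue assertion.
public class StsPipeline {

    interface Validator { boolean isValid(String token); }
    interface ClaimsHandler { Map<String, String> retrieveClaims(String principal); }

    private final List<Validator> validators;
    private final List<ClaimsHandler> claimsHandlers;

    StsPipeline(List<Validator> validators, List<ClaimsHandler> claimsHandlers) {
        this.validators = validators;
        this.claimsHandlers = claimsHandlers;
    }

    // Returns a stub "assertion" on success, or empty on validation failure.
    Optional<String> issue(String token, String principal) {
        // 1. Validators ensure the incoming token is valid.
        if (validators.stream().noneMatch(v -> v.isValid(token))) {
            return Optional.empty();
        }
        // 2. Claims handlers add user attributes.
        StringBuilder claims = new StringBuilder();
        for (ClaimsHandler h : claimsHandlers) {
            h.retrieveClaims(principal)
                    .forEach((k, v) -> claims.append(k).append('=').append(v).append(';'));
        }
        // 3. The token issuer builds the SAML assertion (stubbed here).
        return Optional.of("SAML[" + principal + "|" + claims + "]");
    }
}
```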
Using the Security Token Service (STS)
Once installed, the STS can be used to request SAML v2.0 assertions via a SOAP web service request. Out of the box, the STS supports authentication from existing SAML tokens, CAS proxy tickets, username/password, and X.509 certificates. It also supports retrieving claims using LDAP.
Standalone Installation
The STS cannot currently be installed on a kernel distribution of DDF. To run a STS-only DDF installation, uninstall the catalog components that are not being used. The following list displays the features that can be uninstalled to minimize the runtime size of DDF in an STS-only mode. This list is not a comprehensive list of every feature that can be uninstalled; it is a list of the larger components that can be uninstalled without impacting the STS functionality.
-
catalog-core-standardframework
-
catalog-solr-embedded-provider
-
catalog-opensearch-endpoint
-
catalog-opensearch-source
-
catalog-rest-endpoint
STS Claims Handlers
Claims handlers are classes that convert the incoming user credentials into a set of attribute claims that will be populated in the SAML assertion. An example in action would be the LDAPClaimsHandler that takes in the user’s credentials and retrieves the user’s attributes from a backend LDAP server. These attributes are then mapped and added to the SAML assertion being created. Integrators and developers can add more claims handlers that can handle other types of external services that store user attributes.
Add a Custom Claims Handler
Description
A claim is an additional piece of data about a subject that can be included in a token along with basic token data. A claims manager provides hooks for a developer to plug in claims handlers to ensure that the STS includes the specified claims in the issued token.
Motivation
A developer may want to add a custom claims handler to retrieve attributes from an external attribute store.
Steps
The following steps define the procedure for adding a custom claims handler to the STS.
-
The new claims handler must implement the org.apache.cxf.sts.claims.ClaimsHandler interface.
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.cxf.sts.claims;

import java.net.URI;
import java.util.List;

/**
 * This interface provides a pluggable way to handle Claims.
 */
public interface ClaimsHandler {

    List<URI> getSupportedClaimTypes();

    ClaimCollection retrieveClaimValues(RequestClaimCollection claims, ClaimsParameters parameters);
}
-
Expose the new claims handler as an OSGi service under the org.apache.cxf.sts.claims.ClaimsHandler interface.
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="customClaimsHandler"
        class="security.sts.claimsHandler.CustomClaimsHandler" />
    <service ref="customClaimsHandler"
        interface="org.apache.cxf.sts.claims.ClaimsHandler"/>
</blueprint>
-
Deploy the bundle.
If the new claims handler calls an external service that is secured with SSL, a developer may have to add the root CA of the external site to the DDF trustStore and add a valid certificate to the DDF keyStore. Doing so allows SSL-encrypted messages to be accepted by the external service. For more information on certificates, refer to the Configuring a Java Keystore for Secure Communications page.
STS WS-Trust WSDL Document
|
This XML file is found inside of the STS bundle and is named ws-trust-1.4-service.wsdl. |
<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions xmlns:tns="http://docs.oasis-open.org/ws-sx/ws-trust/200512/" xmlns:wstrust="http://docs.oasis-open.org/ws-sx/ws-trust/200512/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsap10="http://www.w3.org/2006/05/addressing/wsdl" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://www.w3.org/ns/ws-policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" targetNamespace="http://docs.oasis-open.org/ws-sx/ws-trust/200512/">
<wsdl:types>
<xs:schema elementFormDefault="qualified" targetNamespace="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<xs:element name="RequestSecurityToken" type="wst:AbstractRequestSecurityTokenType"/>
<xs:element name="RequestSecurityTokenResponse" type="wst:AbstractRequestSecurityTokenType"/>
<xs:complexType name="AbstractRequestSecurityTokenType">
<xs:sequence>
<xs:any namespace="##any" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Context" type="xs:anyURI" use="optional"/>
<xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
<xs:element name="RequestSecurityTokenCollection" type="wst:RequestSecurityTokenCollectionType"/>
<xs:complexType name="RequestSecurityTokenCollectionType">
<xs:sequence>
<xs:element name="RequestSecurityToken" type="wst:AbstractRequestSecurityTokenType" minOccurs="2" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:element name="RequestSecurityTokenResponseCollection" type="wst:RequestSecurityTokenResponseCollectionType"/>
<xs:complexType name="RequestSecurityTokenResponseCollectionType">
<xs:sequence>
<xs:element ref="wst:RequestSecurityTokenResponse" minOccurs="1" maxOccurs="unbounded"/>
</xs:sequence>
<xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
</xs:schema>
</wsdl:types>
<!-- WS-Trust defines the following GEDs -->
<wsdl:message name="RequestSecurityTokenMsg">
<wsdl:part name="request" element="wst:RequestSecurityToken"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenResponseMsg">
<wsdl:part name="response" element="wst:RequestSecurityTokenResponse"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenCollectionMsg">
<wsdl:part name="requestCollection" element="wst:RequestSecurityTokenCollection"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenResponseCollectionMsg">
<wsdl:part name="responseCollection" element="wst:RequestSecurityTokenResponseCollection"/>
</wsdl:message>
<!-- This portType an example of a Requestor (or other) endpoint that
Accepts SOAP-based challenges from a Security Token Service -->
<wsdl:portType name="WSSecurityRequestor">
<wsdl:operation name="Challenge">
<wsdl:input message="tns:RequestSecurityTokenResponseMsg"/>
<wsdl:output message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
</wsdl:portType>
<!-- This portType is an example of an STS supporting full protocol -->
<wsdl:portType name="STS">
<wsdl:operation name="Cancel">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Cancel" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/CancelFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="Issue">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal" message="tns:RequestSecurityTokenResponseCollectionMsg"/>
</wsdl:operation>
<wsdl:operation name="Renew">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Renew" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/RenewFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="Validate">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Validate" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/ValidateFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="KeyExchangeToken">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/KET" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/KETFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="RequestCollection">
<wsdl:input message="tns:RequestSecurityTokenCollectionMsg"/>
<wsdl:output message="tns:RequestSecurityTokenResponseCollectionMsg"/>
</wsdl:operation>
</wsdl:portType>
<!-- This portType is an example of an endpoint that accepts
Unsolicited RequestSecurityTokenResponse messages -->
<wsdl:portType name="SecurityTokenResponseService">
<wsdl:operation name="RequestSecurityTokenResponse">
<wsdl:input message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
</wsdl:portType>
<wsdl:binding name="STS_Binding" type="wstrust:STS">
<wsp:PolicyReference URI="#STS_policy"/>
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<wsdl:operation name="Issue">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Validate">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Validate"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Cancel">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Cancel"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Renew">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Renew"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="KeyExchangeToken">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/KeyExchangeToken"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="RequestCollection">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/RequestCollection"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
</wsdl:binding>
<wsp:Policy wsu:Id="STS_policy">
<wsp:ExactlyOne>
<wsp:All>
<wsap10:UsingAddressing/>
<wsp:ExactlyOne>
<sp:TransportBinding xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:TransportToken>
<wsp:Policy>
<sp:HttpsToken>
<wsp:Policy/>
</sp:HttpsToken>
</wsp:Policy>
</sp:TransportToken>
<sp:AlgorithmSuite>
<wsp:Policy>
<sp:Basic128/>
</wsp:Policy>
</sp:AlgorithmSuite>
<sp:Layout>
<wsp:Policy>
<sp:Lax/>
</wsp:Policy>
</sp:Layout>
<sp:IncludeTimestamp/>
</wsp:Policy>
</sp:TransportBinding>
</wsp:ExactlyOne>
<sp:Wss11 xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:MustSupportRefKeyIdentifier/>
<sp:MustSupportRefIssuerSerial/>
<sp:MustSupportRefThumbprint/>
<sp:MustSupportRefEncryptedKey/>
</wsp:Policy>
</sp:Wss11>
<sp:Trust13 xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:MustSupportIssuedTokens/>
<sp:RequireClientEntropy/>
<sp:RequireServerEntropy/>
</wsp:Policy>
</sp:Trust13>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="Input_policy">
<wsp:ExactlyOne>
<wsp:All>
<sp:SignedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
<sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="From" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="FaultTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="ReplyTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="MessageID" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="RelatesTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="Action" Namespace="http://www.w3.org/2005/08/addressing"/>
</sp:SignedParts>
<sp:EncryptedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
</sp:EncryptedParts>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="Output_policy">
<wsp:ExactlyOne>
<wsp:All>
<sp:SignedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
<sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="From" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="FaultTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="ReplyTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="MessageID" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="RelatesTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="Action" Namespace="http://www.w3.org/2005/08/addressing"/>
</sp:SignedParts>
<sp:EncryptedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
</sp:EncryptedParts>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsdl:service name="SecurityTokenService">
<wsdl:port name="STS_Port" binding="tns:STS_Binding">
<soap:address location="http://localhost:8181/services/SecurityTokenService"/>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
Example Request and Responses for a SAML Assertion
A client performs a RequestSecurityToken operation against the STS to receive a SAML assertion. The DDF STS offers several different ways to request a SAML assertion. For help in understanding the various request and response formats, samples have been provided. The samples are divided out into different request token types.
Most endpoints that have been used in DDF require the X.509 PublicKey SAML assertion.
BinarySecurityToken (CAS) SAML Security Token Request/Response
BinarySecurityToken (CAS) Sample Request/Response
Request
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:60652909-faca-4e4a-a4a7-8a5ce243a7cb</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">https://server:8993/services/SecurityTokenService</To>
<ReplyTo xmlns="http://www.w3.org/2005/08/addressing">
<Address>http://www.w3.org/2005/08/addressing/anonymous</Address>
</ReplyTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-1">
<wsu:Created>2013-04-29T18:35:10.688Z</wsu:Created>
<wsu:Expires>2013-04-29T18:40:10.688Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/SecurityTokenService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
</wst:Claims>
<wst:OnBehalfOf>
<BinarySecurityToken ValueType="#CAS" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ns1:Id="CAS" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns1="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">U1QtMTQtYUtmcDYxcFRtS0FxZG1pVDMzOWMtY2FzfGh0dHBzOi8vdG9rZW5pc3N1ZXI6ODk5My9zZXJ2aWNlcy9TZWN1cml0eVRva2VuU2VydmljZQ==</BinarySecurityToken>
</wst:OnBehalfOf>
<wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
<wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</wst:KeyType>
<wst:UseKey>
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>
MIIC5DCCAk2gAwIBAgIJAKj7ROPHjo1yMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJVUzEQ
MA4GA1UECAwHQXJpem9uYTERMA8GA1UEBwwIR29vZHllYXIxGDAWBgNVBAoMD0xvY2toZWVkIE1h
cnRpbjENMAsGA1UECwwESTRDRTEPMA0GA1UEAwwGY2xpZW50MRwwGgYJKoZIhvcNAQkBFg1pNGNl
QGxtY28uY29tMB4XDTEyMDYyMDE5NDMwOVoXDTIyMDYxODE5NDMwOVowgYoxCzAJBgNVBAYTAlVT
MRAwDgYDVQQIDAdBcml6b25hMREwDwYDVQQHDAhHb29keWVhcjEYMBYGA1UECgwPTG9ja2hlZWQg
TWFydGluMQ0wCwYDVQQLDARJNENFMQ8wDQYDVQQDDAZjbGllbnQxHDAaBgkqhkiG9w0BCQEWDWk0
Y2VAbG1jby5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAIpHxCBLYE7xfDLcITS9SsPG
4Q04Z6S32/+TriGsRgpGTj/7GuMG7oJ98m6Ws5cTYl7nyunyHTkZuP7rBzy4esDIHheyx18EgdSJ
vvACgGVCnEmHndkf9bWUlAOfNaxW+vZwljUkRUVdkhPbPdPwOcMdKg/SsLSNjZfsQIjoWd4rAgMB
AAGjUDBOMB0GA1UdDgQWBBQx11VLtYXLvFGpFdHnhlNW9+lxBDAfBgNVHSMEGDAWgBQx11VLtYXL
vFGpFdHnhlNW9+lxBDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4GBAHYs2OI0K6yVXzyS
sKcv2fmfw6XCICGTnyA7BOdAjYoqq6wD+33dHJUCFDqye7AWdcivuc7RWJt9jnlfJZKIm2BHcDTR
Hhk6CvjJ14Gf40WQdeMHoX8U8b0diq7Iy5Ravx+zRg7SdiyJUqFYjRh/O5tywXRT1+freI3bwAN0
L6tQ
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</wst:UseKey>
<wst:Renewing/>
</wst:RequestSecurityToken>
</soap:Body>
</soap:Envelope>
Response
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:7a6fde04-9013-41ef-b08b-0689ffa9c93e</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:60652909-faca-4e4a-a4a7-8a5ce243a7cb</RelatesTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-2">
<wsu:Created>2013-04-29T18:35:11.459Z</wsu:Created>
<wsu:Expires>2013-04-29T18:40:11.459Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns2="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns4="http://www.w3.org/2005/08/addressing" xmlns:ns5="http://docs.oasis-open.org/ws-sx/ws-trust/200802">
<RequestSecurityTokenResponse>
<TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</TokenType>
<RequestedSecurityToken>
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="_BDC44EB8593F47D1B213672605113671" IssueInstant="2013-04-29T18:35:11.370Z" Version="2.0" xsi:type="saml2:AssertionType">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_BDC44EB8593F47D1B213672605113671">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xs"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>6wnWbft6Pz5XOF5Q9AG59gcGwLY=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>h+NvkgXGdQtca3/eKebhAKgG38tHp3i2n5uLLy8xXXIg02qyKgEP0FCowp2LiYlsQU9YjKfSwCUbH3WR6jhbAv9zj29CE+ePfEny7MeXvgNl3wId+vcHqti/DGGhhgtO2Mbx/tyX1BhHQUwKRlcHajxHeecwmvV7D85NMdV48tI=</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDmjCCAwOgAwIBAgIBBDANBgkqhkiG9w0BAQQFADB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMH
QXJpem9uYTERMA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4
YW1wbGUxEDAOBgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBMB4XDTEzMDQwOTE4MzcxMVoXDTIz
MDQwNzE4MzcxMVowgaYxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMREwDwYDVQQHEwhH
b29keWVhcjEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UECxMHRXhh
bXBsZTEUMBIGA1UEAxMLdG9rZW5pc3N1ZXIxJjAkBgkqhkiG9w0BCQEWF3Rva2VuaXNzdWVyQGV4
YW1wbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDDfktpA8Lrp9rTfRibKdgtxtN9
uB44diiIqq3JOzDGfDhGLu6mjpuHO1hrKItv42hBOhhmH7lS9ipiaQCIpVfgIG63MB7fa5dBrfGF
G69vFrU1Lfi7IvsVVsNrtAEQljOMmw9sxS3SUsRQX+bD8jq7Uj1hpoF7DdqpV8Kb0COOGwIDAQAB
o4IBBjCCAQIwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2Vy
dGlmaWNhdGUwHQYDVR0OBBYEFD1mHviop2Tc4HaNu8yPXR6GqWP1MIGnBgNVHSMEgZ8wgZyAFBcn
en6/j05DzaVwORwrteKc7TZOoXmkdzB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTER
MA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4YW1wbGUxEDAO
BgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBggkAwXk7OcwO7gwwDQYJKoZIhvcNAQEEBQADgYEA
PiTX5kYXwdhmijutSkrObKpRbQkvkkzcyZlO6VrAxRQ+eFeN6NyuyhgYy5K6l/sIWdaGou5iJOQx
2pQYWx1v8Klyl0W22IfEAXYv/epiO89hpdACryuDJpioXI/X8TAwvRwLKL21Dk3k2b+eyCgA0O++
HM0dPfiQLQ99ElWkv/0=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">srogers</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<saml2:SubjectConfirmationData xsi:type="saml2:KeyInfoConfirmationDataType">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIC5DCCAk2gAwIBAgIJAKj7ROPHjo1yMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJVUzEQ
MA4GA1UECAwHQXJpem9uYTERMA8GA1UEBwwIR29vZHllYXIxGDAWBgNVBAoMD0xvY2toZWVkIE1h
cnRpbjENMAsGA1UECwwESTRDRTEPMA0GA1UEAwwGY2xpZW50MRwwGgYJKoZIhvcNAQkBFg1pNGNl
QGxtY28uY29tMB4XDTEyMDYyMDE5NDMwOVoXDTIyMDYxODE5NDMwOVowgYoxCzAJBgNVBAYTAlVT
MRAwDgYDVQQIDAdBcml6b25hMREwDwYDVQQHDAhHb29keWVhcjEYMBYGA1UECgwPTG9ja2hlZWQg
TWFydGluMQ0wCwYDVQQLDARJNENFMQ8wDQYDVQQDDAZjbGllbnQxHDAaBgkqhkiG9w0BCQEWDWk0
Y2VAbG1jby5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAIpHxCBLYE7xfDLcITS9SsPG
4Q04Z6S32/+TriGsRgpGTj/7GuMG7oJ98m6Ws5cTYl7nyunyHTkZuP7rBzy4esDIHheyx18EgdSJ
vvACgGVCnEmHndkf9bWUlAOfNaxW+vZwljUkRUVdkhPbPdPwOcMdKg/SsLSNjZfsQIjoWd4rAgMB
AAGjUDBOMB0GA1UdDgQWBBQx11VLtYXLvFGpFdHnhlNW9+lxBDAfBgNVHSMEGDAWgBQx11VLtYXL
vFGpFdHnhlNW9+lxBDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4GBAHYs2OI0K6yVXzyS
sKcv2fmfw6XCICGTnyA7BOdAjYoqq6wD+33dHJUCFDqye7AWdcivuc7RWJt9jnlfJZKIm2BHcDTR
Hhk6CvjJ14Gf40WQdeMHoX8U8b0diq7Iy5Ravx+zRg7SdiyJUqFYjRh/O5tywXRT1+freI3bwAN0
L6tQ</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</saml2:SubjectConfirmationData>
</saml2:SubjectConfirmation>
</saml2:Subject>
<saml2:Conditions NotBefore="2013-04-29T18:35:11.407Z" NotOnOrAfter="2013-04-29T19:05:11.407Z">
<saml2:AudienceRestriction>
<saml2:Audience>https://server:8993/services/SecurityTokenService</saml2:Audience>
</saml2:AudienceRestriction>
</saml2:Conditions>
<saml2:AuthnStatement AuthnInstant="2013-04-29T18:35:11.392Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Steve Rogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</RequestedSecurityToken>
<RequestedAttachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_BDC44EB8593F47D1B213672605113671</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedAttachedReference>
<RequestedUnattachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_BDC44EB8593F47D1B213672605113671</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedUnattachedReference>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/SecurityTokenService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<Lifetime>
<ns2:Created>2013-04-29T18:35:11.444Z</ns2:Created>
<ns2:Expires>2013-04-29T19:05:11.444Z</ns2:Expires>
</Lifetime>
</RequestSecurityTokenResponse>
</RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
UsernameToken Bearer SAML Security Token Request/Response
To obtain a SAML assertion for use in secure communication with DDF, a RequestSecurityToken (RST) request must be made to the STS.
A Bearer SAML assertion is automatically trusted by the endpoint; the client does not have to prove that it owns the assertion. This is the simplest way to request a SAML assertion, but many endpoints will not accept a KeyType of Bearer.
Request
Explanation
The STS requires the following in the RequestSecurityToken request in order to issue a valid SAML assertion:
- WS-Addressing header with Action, To, and MessageID
- Valid, non-expired timestamp
- UsernameToken containing a username and password that the STS will authenticate
- Issued over HTTPS
- KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer
- Claims (optional): Some endpoints may require that the SAML assertion include attributes of the user, such as an authenticated user’s role, name identifier, email address, etc. If the SAML assertion needs those attributes, the RequestSecurityToken must specify which ones to include.
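The request envelope can also be assembled programmatically. The following Python sketch fills a trimmed version of the envelope above with a fresh timestamp, message ID, and credentials; the helper name and its parameters are illustrative, not part of any DDF API, and the resulting document would still need to be POSTed to the STS over HTTPS with a SOAP-capable client.

```python
"""Build a UsernameToken Bearer RequestSecurityToken envelope (sketch)."""
import datetime
import uuid

RST_TEMPLATE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                   xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                   soap:mustUnderstand="1">
      <wsu:Timestamp>
        <wsu:Created>{created}</wsu:Created>
        <wsu:Expires>{expires}</wsu:Expires>
      </wsu:Timestamp>
      <wsse:UsernameToken>
        <wsse:Username>{username}</wsse:Username>
        <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">{password}</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
    <wsa:Action>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</wsa:Action>
    <wsa:MessageID>uuid:{message_id}</wsa:MessageID>
    <wsa:To>{sts_url}</wsa:To>
  </soap:Header>
  <soap:Body>
    <wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
      <wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</wst:KeyType>
      <wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
          <wsa:Address>{applies_to}</wsa:Address>
        </wsa:EndpointReference>
      </wsp:AppliesTo>
    </wst:RequestSecurityToken>
  </soap:Body>
</soap:Envelope>
"""

def build_bearer_rst(username, password, sts_url, applies_to, validity_minutes=10):
    """Fill the template with a fresh UTC timestamp and message ID."""
    now = datetime.datetime.utcnow()
    stamp = lambda t: t.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return RST_TEMPLATE.format(
        created=stamp(now),
        expires=stamp(now + datetime.timedelta(minutes=validity_minutes)),
        username=username,
        password=password,
        message_id=uuid.uuid4(),
        sts_url=sts_url,
        applies_to=applies_to,
    )
```

The template deliberately omits optional Claims; if the target endpoint requires them, add a wst:Claims block as shown in the request below.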
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-1">
<wsu:Created>2013-04-29T17:47:37.817Z</wsu:Created>
<wsu:Expires>2013-04-29T17:57:37.817Z</wsu:Expires>
</wsu:Timestamp>
<wsse:UsernameToken wsu:Id="UsernameToken-1">
<wsse:Username>srogers</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password1</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
<wsa:Action>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</wsa:Action>
<wsa:MessageID>uuid:a1bba87b-0f00-46cc-975f-001391658cbe</wsa:MessageID>
<wsa:To>https://server:8993/services/SecurityTokenService</wsa:To>
</soap:Header>
<soap:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:SecondaryParameters>
<t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
<t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
<t:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
<!--Add any additional claims you want to grab for the service-->
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/uid"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
</t:Claims>
</wst:SecondaryParameters>
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:Renewing/>
</wst:RequestSecurityToken>
</soap:Body>
</soap:Envelope>
Response
This is the response from the STS containing the SAML assertion to be used in subsequent requests to QCRUD endpoints:
The saml2:Assertion block contains the entire SAML assertion.
The Signature block contains a signature created with the STS’s private key. The endpoint receiving the SAML assertion verifies that it trusts the signer and that the message was not tampered with.
The AttributeStatement block contains all the Claims requested.
The Lifetime block indicates the valid time interval in which the SAML assertion can be used.
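The assertion element can be pulled out of the response with any XML parser. A minimal Python sketch using the standard library, run here against a trimmed copy of the response structure (the full response below also carries the signature, subject, and attribute blocks omitted from this sample):

```python
"""Extract the saml2:Assertion from an STS RequestSecurityTokenResponse."""
import xml.etree.ElementTree as ET

NS = {
    "trust": "http://docs.oasis-open.org/ws-sx/ws-trust/200512",
    "saml2": "urn:oasis:names:tc:SAML:2.0:assertion",
}

# Trimmed sample mirroring the structure of the real STS response.
SAMPLE_RESPONSE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <RequestSecurityTokenResponse>
        <RequestedSecurityToken>
          <saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
                           ID="_7437C1A55F19AFF22113672577526132" Version="2.0">
            <saml2:Issuer>tokenissuer</saml2:Issuer>
          </saml2:Assertion>
        </RequestedSecurityToken>
      </RequestSecurityTokenResponse>
    </RequestSecurityTokenResponseCollection>
  </soap:Body>
</soap:Envelope>
"""

def extract_assertion(response_xml):
    """Return the saml2:Assertion element, or raise if none is present."""
    root = ET.fromstring(response_xml)
    assertion = root.find(".//trust:RequestedSecurityToken/saml2:Assertion", NS)
    if assertion is None:
        raise ValueError("no SAML assertion in STS response")
    return assertion
```

The assertion's ID attribute is the same value referenced by the KeyIdentifier in the attached and unattached references, so it can be used to correlate the token in later messages.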
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:eee4c6ef-ac10-4cbc-a53c-13d960e3b6e8</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">uuid:a1bba87b-0f00-46cc-975f-001391658cbe</RelatesTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-2">
<wsu:Created>2013-04-29T17:49:12.624Z</wsu:Created>
<wsu:Expires>2013-04-29T17:54:12.624Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns2="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns4="http://www.w3.org/2005/08/addressing" xmlns:ns5="http://docs.oasis-open.org/ws-sx/ws-trust/200802">
<RequestSecurityTokenResponse>
<TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</TokenType>
<RequestedSecurityToken>
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="_7437C1A55F19AFF22113672577526132" IssueInstant="2013-04-29T17:49:12.613Z" Version="2.0" xsi:type="saml2:AssertionType">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_7437C1A55F19AFF22113672577526132">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xs"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>ReOqEbGZlyplW5kqiynXOjPnVEA=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>X5Kzd54PrKIlGVV2XxzCmWFRzHRoybF7hU6zxbEhSLMR0AWS9R7Me3epq91XqeOwvIDDbwmE/oJNC7vI0fIw/rqXkx4aZsY5a5nbAs7f+aXF9TGdk82x2eNhNGYpViq0YZJfsJ5WSyMtG8w5nRekmDMy9oTLsHG+Y/OhJDEwq58=</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDmjCCAwOgAwIBAgIBBDANBgkqhkiG9w0BAQQFADB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMH
QXJpem9uYTERMA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4
YW1wbGUxEDAOBgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBMB4XDTEzMDQwOTE4MzcxMVoXDTIz
MDQwNzE4MzcxMVowgaYxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMREwDwYDVQQHEwhH
b29keWVhcjEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UECxMHRXhh
bXBsZTEUMBIGA1UEAxMLdG9rZW5pc3N1ZXIxJjAkBgkqhkiG9w0BCQEWF3Rva2VuaXNzdWVyQGV4
YW1wbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDDfktpA8Lrp9rTfRibKdgtxtN9
uB44diiIqq3JOzDGfDhGLu6mjpuHO1hrKItv42hBOhhmH7lS9ipiaQCIpVfgIG63MB7fa5dBrfGF
G69vFrU1Lfi7IvsVVsNrtAEQljOMmw9sxS3SUsRQX+bD8jq7Uj1hpoF7DdqpV8Kb0COOGwIDAQAB
o4IBBjCCAQIwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2Vy
dGlmaWNhdGUwHQYDVR0OBBYEFD1mHviop2Tc4HaNu8yPXR6GqWP1MIGnBgNVHSMEgZ8wgZyAFBcn
en6/j05DzaVwORwrteKc7TZOoXmkdzB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTER
MA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4YW1wbGUxEDAO
BgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBggkAwXk7OcwO7gwwDQYJKoZIhvcNAQEEBQADgYEA
PiTX5kYXwdhmijutSkrObKpRbQkvkkzcyZlO6VrAxRQ+eFeN6NyuyhgYy5K6l/sIWdaGou5iJOQx
2pQYWx1v8Klyl0W22IfEAXYv/epiO89hpdACryuDJpioXI/X8TAwvRwLKL21Dk3k2b+eyCgA0O++
HM0dPfiQLQ99ElWkv/0=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">srogers</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"/>
</saml2:Subject>
<saml2:Conditions NotBefore="2013-04-29T17:49:12.614Z" NotOnOrAfter="2013-04-29T18:19:12.614Z">
<saml2:AudienceRestriction>
<saml2:Audience>https://server:8993/services/QueryService</saml2:Audience>
</saml2:AudienceRestriction>
</saml2:Conditions>
<saml2:AuthnStatement AuthnInstant="2013-04-29T17:49:12.613Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Steve Rogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</RequestedSecurityToken>
<RequestedAttachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_7437C1A55F19AFF22113672577526132</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedAttachedReference>
<RequestedUnattachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_7437C1A55F19AFF22113672577526132</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedUnattachedReference>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<Lifetime>
<ns2:Created>2013-04-29T17:49:12.620Z</ns2:Created>
<ns2:Expires>2013-04-29T18:19:12.620Z</ns2:Expires>
</Lifetime>
</RequestSecurityTokenResponse>
</RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
X.509 PublicKey SAML Security Token Request/Response
To obtain a SAML assertion for use in secure communication with DDF, a RequestSecurityToken (RST) request must be made to the STS.
An endpoint’s policy specifies the type of security token needed. Most endpoints that have been used with DDF require a SAML v2.0 assertion with a KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey. This means the SAML assertion the client provides to a DDF endpoint must contain a SubjectConfirmation block of type "holder-of-key" containing the client’s public key, which proves that the client possesses the SAML assertion returned by the STS.
Request
Explanation
The STS that comes with DDF requires the following in the RequestSecurityToken request in order to issue a valid SAML assertion. See the request block below for an example of how these components should be populated.
- WS-Addressing header containing Action, To, and MessageID blocks
- Valid, non-expired timestamp
- Issued over HTTPS
- TokenType of http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
- KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey
- X.509 certificate as the Proof of Possession (POP). This must be the certificate of the client that will both request the SAML assertion and use it to issue a query.
- Claims (optional): Some endpoints may require that the SAML assertion include attributes of the user, such as an authenticated user’s role, name identifier, email address, etc. If the SAML assertion needs those attributes, the RequestSecurityToken must specify which ones to include.
  - UsernameToken: If Claims are required, the RequestSecurityToken security header must contain a UsernameToken element with a username and password.
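The wst:UseKey block carries the client certificate as base64-encoded DER, which is simply the body of a PEM file without its armor lines. A hypothetical Python helper (the template and function name are illustrative, not DDF code) shows the transformation:

```python
"""Wrap a PEM client certificate into the wst:UseKey fragment (sketch)."""

USE_KEY_TEMPLATE = """\
<wst:UseKey xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
    <ds:X509Data>
      <ds:X509Certificate>{cert_b64}</ds:X509Certificate>
    </ds:X509Data>
  </ds:KeyInfo>
</wst:UseKey>
"""

def use_key_from_pem(pem_text):
    """Strip the PEM BEGIN/END armor, keeping the base64 DER body."""
    lines = [line.strip() for line in pem_text.splitlines()]
    body = [l for l in lines if l and not l.startswith("-----")]
    return USE_KEY_TEMPLATE.format(cert_b64="\n".join(body))
```

The resulting fragment drops into the wst:RequestSecurityToken body as shown in the request below.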
<soapenv:Envelope xmlns:ns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Action>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</wsa:Action>
<wsa:MessageID>uuid:527243af-94bd-4b5c-a1d8-024fd7e694c5</wsa:MessageID>
<wsa:To>https://server:8993/services/SecurityTokenService</wsa:To>
<wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsu:Timestamp wsu:Id="TS-17">
<wsu:Created>2014-02-19T17:30:40.771Z</wsu:Created>
<wsu:Expires>2014-02-19T19:10:40.771Z</wsu:Expires>
</wsu:Timestamp>
<!-- OPTIONAL: Only required if the endpoint that the SAML assertion will be sent to requires claims. -->
<wsse:UsernameToken wsu:Id="UsernameToken-16">
<wsse:Username>pparker</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password1</wsse:Password>
<wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">LCTD+5Y7hlWIP6SpsEg9XA==</wsse:Nonce>
<wsu:Created>2014-02-19T17:30:37.355Z</wsu:Created>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
<wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</wst:KeyType>
<!-- OPTIONAL: Only required if the endpoint that the SAML assertion will be sent to requires claims. -->
<wst:Claims Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity">
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
</wst:Claims>
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:UseKey>
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIFGDCCBACgAwIBAgICJe0wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjU0OVoXDTE2MDUwNzAwMjU0OVowaTELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxDzANBgNVBAMTBmNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAOq6L1/jjZ5cyhjhHEbOHr5WQpboKACYbrsn8lg85LGNoAfcwImr9KBmOxGb
ZCxHYIhkW7pJ+kppyH8DbbbDMviIvvdkvrAIU0l8OBRn2wReCBGQ01Imdc3+WzFF2svW75d6wii2ZVd
eMvUO15p/pAD/sdIfXmAfyu8+tqtiO8KVZGkTnlg3AMzfeSrkci5UHMVWj0qUSuzLk9SAg/9STgb
Kf2xBpHUYecWFSB+dTpdZN2pC85tj9xIoWGh5dFWG1fPcYRgzGPxsybiGOylbJ7rHDJuL7IIIyx5
EnkCuxmQwoQ6XQAhiWRGyPlY08w1LZixI2v+Cv/ZjUfIHv49I9P4Mt8CAwEAAaOCAdUwggHRMB8G
A1UdIwQYMBaAFCMUNCBNXy43NZLBBlnDjDplNZJoMB0GA1UdDgQWBBRPGiX6zZzKTqQSx/tjg6hx
9opDoTAOBgNVHQ8BAf8EBAMCBaAwgdoGA1UdHwSB0jCBzzA2oDSgMoYwaHR0cDovL2NybC5nZHMu
bml0LmRpc2EubWlsL2NybC9ET0RKSVRDQ0FfMjcuY3JsMIGUoIGRoIGOhoGLbGRhcDovL2NybC5n
ZHMubml0LmRpc2EubWlsL2NuJTNkRE9EJTIwSklUQyUyMENBLTI3JTJjb3UlM2RQS0klMmNvdSUz
ZERvRCUyY28lM2RVLlMuJTIwR292ZXJubWVudCUyY2MlM2RVUz9jZXJ0aWZpY2F0ZXJldm9jYXRp
b25saXN0O2JpbmFyeTAjBgNVHSAEHDAaMAsGCWCGSAFlAgELBTALBglghkgBZQIBCxIwfQYIKwYB
BQUHAQEEcTBvMD0GCCsGAQUFBzAChjFodHRwOi8vY3JsLmdkcy5uaXQuZGlzYS5taWwvc2lnbi9E
T0RKSVRDQ0FfMjcuY2VyMC4GCCsGAQUFBzABhiJodHRwOi8vb2NzcC5uc24wLnJjdnMubml0LmRp
c2EubWlsMA0GCSqGSIb3DQEBBQUAA4IBAQCGUJPGh4iGCbr2xCMqCq04SFQ+iaLmTIFAxZPFvup1
4E9Ir6CSDalpF9eBx9fS+Z2xuesKyM/g3YqWU1LtfWGRRIxzEujaC4YpwHuffkx9QqkwSkXXIsim
EhmzSgzxnT4Q9X8WwalqVYOfNZ6sSLZ8qPPFrLHkkw/zIFRzo62wXLu0tfcpOr+iaJBhyDRinIHr
hwtE3xo6qQRRWlO3/clC4RnTev1crFVJQVBF3yfpRu8udJ2SOGdqU0vjUSu1h7aMkYJMHIu08Whj
8KASjJBFeHPirMV1oddJ5ydZCQ+Jmnpbwq+XsCxg1LjC4dmbjKVr9s4QK+/JLNjxD8IkJiZE</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</wst:UseKey>
</wst:RequestSecurityToken>
</soapenv:Body>
</soapenv:Envelope>
Response
Explanation
This is the response from the STS containing the SAML assertion to be used in subsequent requests to QCRUD endpoints.
The saml2:Assertion block contains the entire SAML assertion.
The Signature block contains a signature created with the STS’s private key. The endpoint receiving the SAML assertion verifies that it trusts the signer and that the message was not tampered with.
The SubjectConfirmation block contains the client’s public key, so the server can verify that the client has permission to hold this SAML assertion.
The AttributeStatement block contains all of the claims requested.
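When claims were requested, they can be read back out of the AttributeStatement. A standard-library Python sketch against a trimmed sample assertion; note that multi-valued claims such as role appear as repeated Attribute elements with the same Name:

```python
"""Collect claim values from a SAML assertion's AttributeStatement."""
import xml.etree.ElementTree as ET

SAML2 = "{urn:oasis:names:tc:SAML:2.0:assertion}"

# Trimmed sample mirroring the AttributeStatement in the full response.
SAMPLE_ASSERTION = """\
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" Version="2.0">
  <saml2:AttributeStatement>
    <saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier">
      <saml2:AttributeValue>pparker</saml2:AttributeValue>
    </saml2:Attribute>
    <saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role">
      <saml2:AttributeValue>avengers</saml2:AttributeValue>
    </saml2:Attribute>
    <saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role">
      <saml2:AttributeValue>admin</saml2:AttributeValue>
    </saml2:Attribute>
  </saml2:AttributeStatement>
</saml2:Assertion>
"""

def claims(assertion_xml):
    """Map each claim URI to the list of its attribute values."""
    root = ET.fromstring(assertion_xml)
    out = {}
    for attr in root.iter(SAML2 + "Attribute"):
        values = [v.text for v in attr.iter(SAML2 + "AttributeValue")]
        out.setdefault(attr.get("Name"), []).extend(values)
    return out
```

An endpoint enforcing role-based access would consult the role claim's value list when evaluating the request.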
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:b46c35ad-3120-4233-ae07-b9e10c7911f3</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">uuid:527243af-94bd-4b5c-a1d8-024fd7e694c5</RelatesTo>
<wsse:Security soap:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsu:Timestamp wsu:Id="TS-90DBA0754E55B4FE7013928310431357">
<wsu:Created>2014-02-19T17:30:43.135Z</wsu:Created>
<wsu:Expires>2014-02-19T17:35:43.135Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<ns2:RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200802" xmlns:ns2="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns4="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns5="http://www.w3.org/2005/08/addressing">
<ns2:RequestSecurityTokenResponse>
<ns2:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</ns2:TokenType>
<ns2:RequestedSecurityToken>
<saml2:Assertion ID="_90DBA0754E55B4FE7013928310431176" IssueInstant="2014-02-19T17:30:43.117Z" Version="2.0" xsi:type="saml2:AssertionType" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_90DBA0754E55B4FE7013928310431176">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces PrefixList="xs" xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>/bEGqsRGHVJbx298WPmGd8I53zs=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>
mYR7w1/dnuh8Z7t9xjCb4XkYQLshj+UuYlGOuTwDYsUPcS2qI0nAgMD1VsDP7y1fDJxeqsq7HYhFKsnqRfebMM4WLH1D/lJ4rD4UO+i9l3tuiHml7SN24WM1/bOqfDUCoDqmwG8afUJ3r4vmTNPxfwfOss8BZ/8ODgZzm08ndlkxDfvcN7OrExbV/3/45JwF/MMPZoqvi2MJGfX56E9fErJNuzezpWnRqPOlWPxyffKMAlVaB9zF6gvVnUqcW2k/Z8X9lN7O5jouBI281ZnIfsIPuBJERFtYNVDHsIXM1pJnrY6FlKIaOsi55LQu3Ruir/n82pU7BT5aWtxwrn7akBg== </ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIFHTCCBAWgAwIBAgICJe8wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjYzN1oXDTE2MDUwNzAwMjYzN1owbjELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxFDASBgNVBAMTC3Rva2VuaXNzdWVyMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAx01/U4M1wG+wL1JxX2RL1glj101FkJXMk3KFt3zD//N8x/Dcwwvs
ngCQjXrV6YhbB2V7scHwnThPv3RSwYYiO62z+g6ptfBbKGGBLSZOzLe3fyJR4RxblFKsELFgPHfX
vgUHS/keG5uSRk9S/Okqps/yxKB7+ZlxeFxsIz5QywXvBpMiXtc2zF+M7BsbSIdSx5LcPcDFBwjF
c66rE3/y/25VMht9EZX1QoKr7f8rWD4xgd5J6DYMFWEcmiCz4BDJH9sfTw+n1P+CYgrhwslWGqxt
cDME9t6SWR3GLT4Sdtr8ziIM5uUteEhPIV3rVC3/u23JbYEeS8mpnp0bxt5eHQIDAQABo4IB1TCC
AdEwHwYDVR0jBBgwFoAUIxQ0IE1fLjc1ksEGWcOMOmU1kmgwHQYDVR0OBBYEFGBjdkdey+bMHMhC
Z7gwiQ/mJf5VMA4GA1UdDwEB/wQEAwIFoDCB2gYDVR0fBIHSMIHPMDagNKAyhjBodHRwOi8vY3Js
Lmdkcy5uaXQuZGlzYS5taWwvY3JsL0RPREpJVENDQV8yNy5jcmwwgZSggZGggY6GgYtsZGFwOi8v
Y3JsLmdkcy5uaXQuZGlzYS5taWwvY24lM2RET0QlMjBKSVRDJTIwQ0EtMjclMmNvdSUzZFBLSSUy
Y291JTNkRG9EJTJjbyUzZFUuUy4lMjBHb3Zlcm5tZW50JTJjYyUzZFVTP2NlcnRpZmljYXRlcmV2
b2NhdGlvbmxpc3Q7YmluYXJ5MCMGA1UdIAQcMBowCwYJYIZIAWUCAQsFMAsGCWCGSAFlAgELEjB9
BggrBgEFBQcBAQRxMG8wPQYIKwYBBQUHMAKGMWh0dHA6Ly9jcmwuZ2RzLm5pdC5kaXNhLm1pbC9z
aWduL0RPREpJVENDQV8yNy5jZXIwLgYIKwYBBQUHMAGGImh0dHA6Ly9vY3NwLm5zbjAucmN2cy5u
aXQuZGlzYS5taWwwDQYJKoZIhvcNAQEFBQADggEBAIHZQTINU3bMpJ/PkwTYLWPmwCqAYgEUzSYx
bNcVY5MWD8b4XCdw5nM3GnFlOqr4IrHeyyOzsEbIebTe3bv0l1pHx0Uyj059nAhx/AP8DjVtuRU1
/Mp4b6uJ/4yaoMjIGceqBzHqhHIJinG0Y2azua7eM9hVbWZsa912ihbiupCq22mYuHFP7NUNzBvV
j03YUcsy/sES5sRx9Rops/CBN+LUUYOdJOxYWxo8oAbtF8ABE5ATLAwqz4ttsToKPUYh1sxdx5Ef
APeZ+wYDmMu4OfLckwnCKZgkEtJOxXpdIJHY+VmyZtQSB0LkR5toeH/ANV4259Ia5ZT8h2/vIJBg
6B4=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">pparker</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<saml2:SubjectConfirmationData xsi:type="saml2:KeyInfoConfirmationDataType">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIFGDCCBACgAwIBAgICJe0wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjU0OVoXDTE2MDUwNzAwMjU0OVowaTELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxDzANBgNVBAMTBmNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAOq6L1/jjZ5cyhjhHEbOHr5WQpboKACYbrsn8lg85LGNoAfcwImr9KBmOxGb
ZCxHYIhkW7pJ+kppyH8bbbviIvvdkvrAIU0l8OBRn2wReCBGQ01Imdc3+WzFF2svW75d6wii2ZVd
eMvUO15p/pAD/sdIfXmAfyu8+tqtiO8KVZGkTnlg3AMzfeSrkci5UHMVWj0qUSuzLk9SAg/9STgb
Kf2xBpHUYecWFSB+dTpdZN2pC85tj9xIoWGh5dFWG1fPcYRgzGPxsybiGOylbJ7rHDJuL7IIIyx5
EnkCuxmQwoQ6XQAhiWRGyPlY08w1LZixI2v+Cv/ZjUfIHv49I9P4Mt8CAwEAAaOCAdUwggHRMB8G
A1UdIwQYMBaAFCMUNCBNXy43NZLBBlnDjDplNZJoMB0GA1UdDgQWBBRPGiX6zZzKTqQSx/tjg6hx
9opDoTAOBgNVHQ8BAf8EBAMCBaAwgdoGA1UdHwSB0jCBzzA2oDSgMoYwaHR0cDovL2NybC5nZHMu
bml0LmRpc2EubWlsL2NybC9ET0RKSVRDQ0FfMjcuY3JsMIGUoIGRoIGOhoGLbGRhcDovL2NybC5n
ZHMubml0LmRpc2EubWlsL2NuJTNkRE9EJTIwSklUQyUyMENBLTI3JTJjb3UlM2RQS0klMmNvdSUz
ZERvRCUyY28lM2RVLlMuJTIwR292ZXJubWVudCUyY2MlM2RVUz9jZXJ0aWZpY2F0ZXJldm9jYXRp
b25saXN0O2JpbmFyeTAjBgNVHSAEHDAaMAsGCWCGSAFlAgELBTALBglghkgBZQIBCxIwfQYIKwYB
BQUHAQEEcTBvMD0GCCsGAQUFBzAChjFodHRwOi8vY3JsLmdkcy5uaXQuZGlzYS5taWwvc2lnbi9E
T0RKSVRDQ0FfMjcuY2VyMC4GCCsGAQUFBzABhiJodHRwOi8vb2NzcC5uc24wLnJjdnMubml0LmRp
c2EubWlsMA0GCSqGSIb3DQEBBQUAA4IBAQCGUJPGh4iGCbr2xCMqCq04SFQ+iaLmTIFAxZPFvup1
4E9Ir6CSDalpF9eBx9fS+Z2xuesKyM/g3YqWU1LtfWGRRIxzEujaC4YpwHuffkx9QqkwSkXXIsim
EhmzSgzxnT4Q9X8WwalqVYOfNZ6sSLZ8qPPFrLHkkw/zIFRzo62wXLu0tfcpOr+iaJBhyDRinIHr
hwtE3xo6qQRRWlO3/clC4RnTev1crFVJQVBF3yfpRu8udJ2SOGdqU0vjUSu1h7aMkYJMHIu08Whj
8KASjJBFeHPirMV1oddJ5ydZCQ+Jmnpbwq+XsCxg1LjC4dmbjKVr9s4QK+/JLNjxD8IkJiZE</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</saml2:SubjectConfirmationData>
</saml2:SubjectConfirmation>
</saml2:Subject>
<saml2:Conditions NotBefore="2014-02-19T17:30:43.119Z" NotOnOrAfter="2014-02-19T18:00:43.119Z"/>
<saml2:AuthnStatement AuthnInstant="2014-02-19T17:30:43.117Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<!-- This block will only be included if Claims were requested in the RST. -->
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Peter Parker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">users</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">users</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</ns2:RequestedSecurityToken>
<ns2:RequestedAttachedReference>
<ns4:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<ns4:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_90DBA0754E55B4FE7013928310431176</ns4:KeyIdentifier>
</ns4:SecurityTokenReference>
</ns2:RequestedAttachedReference>
<ns2:RequestedUnattachedReference>
<ns4:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<ns4:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_90DBA0754E55B4FE7013928310431176</ns4:KeyIdentifier>
</ns4:SecurityTokenReference>
</ns2:RequestedUnattachedReference>
<ns2:Lifetime>
<ns3:Created>2014-02-19T17:30:43.119Z</ns3:Created>
<ns3:Expires>2014-02-19T18:00:43.119Z</ns3:Expires>
</ns2:Lifetime>
</ns2:RequestSecurityTokenResponse>
</ns2:RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
Authz filter
The Authz filter determines authorization for the subject by calling the policy manager. It then sends the subject through Shiro to a SimpleAuthZ realm.
|
The SimpleAuthz realm can be replaced by a XACML implementation of the same functionality. |
The SimpleAuthZ realm calls the expansion service to translate any properties associated with the subject and returns a boolean value indicating whether authorization is granted. Upon receiving a positive result, the AuthZ filter allows access to the endpoint.
XACML Policy Decision Point (PDP)
After unzipping the DDF distribution, place the desired XACML policy in the <distribution root>/etc/pdp/policies directory. This is the directory in which the PDP will look for XACML policies every 60 seconds. A sample XACML policy is located at the end of this page. Information on specific bundle configurations and names can be found on the Security PDP application page.
Creating a Policy
This document assumes familiarity with the XACML schema and does not go into detail on the XACML language. There are some DDF-specific items that need to be considered when creating a policy to be compatible with the XACMLRealm. When creating a policy, a target is used to indicate that a certain action should be run only for one type of request. Targets can be used on both the main policy element and any individual rules. Generally, targets are geared toward the actions that are set in the request.
Actions
For DDF, these actions are populated by various components in the security API. The actions and their population location are identified in the following table.
| Operation | Action-id Value | Component Setting the Action | Description |
|---|---|---|---|
| Filtering | filter | security-pdp-xacmlrealm | When performing any filtering, the XACMLRealm will set the action-id to "filter". |
| Service | <SOAPAction> | security-pep-interceptor | If the PEP Interceptor is added to any SOAP-based web services for service authorization, the action-id will be the SOAPAction of the incoming request. This allows the XACML policy to have specific rules for individual services within the system. |
|
These are only the action-id values that are currently created by the components that come with DDF. Additional components can be created and added to DDF to identify specific action-ids. |
In the examples below, the policy specifies targets for the above types of calls. For the filtering code, the target was set for "filter," and the service validation code targets were geared toward two services: query and LocalSiteName. In a production environment, the actions for service authorization will generally be full URNs that are described within the SOAP WSDL.
Attributes
Attributes for the XACML request are populated with the information in the calling subject and the resource being checked.
Subject
The attributes for the subject are obtained from the SAML claims and populated within the XACMLRealm as individual attributes under the urn:oasis:names:tc:xacml:1.0:subject-category:access-subject category. The name of the claim is used for the AttributeId value. Examples of the items being populated are available at the end of this page.
Resource
The attributes for resources are obtained through the permissions process. When checking permissions, the XACMLRealm retrieves a list of permissions that should be checked against the subject. These permissions are populated outside of the realm and should be populated with the security attributes located in the metacard security property. When the permissions are of a key-value type, the key being used is populated as the AttributeId value under the urn:oasis:names:tc:xacml:3.0:attribute-category:resource category.
Example Requests and Responses
The following items are a sample request, response, and the corresponding policy. For the XACML PDP, the request is made by the XACML realm (security-pdp-xacmlrealm), passed to the XACML processing engine (security-pdp-xacmlprocessor), which reads the policy and outputs a response.
Policy
This is the sample policy that was used for the following sample request and responses. The policy was made to handle the following actions: filter, query, and LocalSiteName. The filter action is used to compare the subject’s SUBJECT_ACCESS attributes to the metacard’s RESOURCE_ACCESS attributes. The query and LocalSiteName actions differ, as they are used to perform service authorization. For a query, the user must be associated with the country code ATA (Antarctica), while a LocalSiteName action can be performed by anyone.
<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="xpath-target-single-req" RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:permit-overrides" Version="1.0">
<PolicyDefaults>
<XPathVersion>http://www.w3.org/TR/1999/REC-xpath-19991116</XPathVersion>
</PolicyDefaults>
<Target>
<AnyOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">LocalSiteName</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
</AnyOf>
</Target>
<Rule Effect="Permit" RuleId="permit-filter">
<Target>
<AnyOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Match>
</AllOf>
</AnyOf>
</Target>
<Condition>
<Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-subset">
<AttributeDesignator AttributeId="RESOURCE_ACCESS" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
<AttributeDesignator AttributeId="SUBJECT_ACCESS" Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
</Apply>
</Condition>
</Rule>
<Rule Effect="Permit" RuleId="permit-action">
<Target>
<AnyOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
<AttributeDesignator AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
</Match>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
</Match>
</AllOf>
<AllOf>
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">LocalSiteName</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
</Match>
</AllOf>
</AnyOf>
</Target>
</Rule>
<Rule Effect="Deny" RuleId="deny-read"/>
</Policy>
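The decision logic this policy encodes can be sketched outside of XACML as follows. This is an illustrative simplification only (the `pdp_decision` function is hypothetical, not part of DDF or the XACML processing engine); it mirrors the subset check of the permit-filter rule, the country-code check of the permit-action rule, and the final deny-read fallback:

```python
def pdp_decision(action, subject, resource):
    """Illustrative sketch of the sample policy's logic (not the XACML engine).

    - "filter": permit when every RESOURCE_ACCESS value is also one of the
      subject's SUBJECT_ACCESS values (the string-subset condition).
    - "query": permit when the subject's CountryOfCitizenship includes ATA.
    - "LocalSiteName": permit for any subject.
    - anything else falls through to the final deny-read rule.
    """
    if action == "filter":
        resource_access = set(resource.get("RESOURCE_ACCESS", []))
        subject_access = set(subject.get("SUBJECT_ACCESS", []))
        return "Permit" if resource_access <= subject_access else "Deny"
    if action == "query":
        return "Permit" if "ATA" in subject.get("CountryOfCitizenship", []) else "Deny"
    if action == "LocalSiteName":
        return "Permit"
    return "Deny"

# Mirrors the "Allowed Query" and "Denied Query" samples that follow:
print(pdp_decision("query", {"CountryOfCitizenship": ["ATA"]}, {}))  # Permit
print(pdp_decision("query", {"CountryOfCitizenship": ["USA"]}, {}))  # Deny
```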
Service Authorization
Allowed Query
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Permit</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
Denied Query
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">query</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User USA</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">USA</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Deny</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
Metacard Authorization
Subject Permitted
All of the resource’s RESOURCE_ACCESS attributes were matched with the Subject’s SUBJECT_ACCESS attributes.
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Permit</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
Subject Denied
The resource had an additional RESOURCE_ACCESS attribute 'C' that the subject did not have.
<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" ReturnPolicyIdList="false" CombinedDecision="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">users</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">admin</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Test User</AttributeValue>
</Attribute>
<Attribute AttributeId="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">testuser1</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="SUBJECT_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="http://www.opm.gov/feddata/CountryOfCitizenship" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">ATA</AttributeValue>
</Attribute>
</Attributes>
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">A</AttributeValue>
</Attribute>
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">B</AttributeValue>
</Attribute>
<Attribute AttributeId="RESOURCE_ACCESS" IncludeInResult="false">
<AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">C</AttributeValue>
</Attribute>
</Attributes>
</Request>
<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
<Result>
<Decision>Deny</Decision>
<Status>
<StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
</Status>
</Result>
</Response>
Expansion Service
The Expansion Service and its corresponding expansion-related commands provide an easy way for developers to add expansion capabilities to DDF during user attribute and metacard processing. In addition to these two defined uses of the expansion service, developers are free to utilize the service in their own implementations.
Each instance of the expansion service consists of a collection of rule sets, where each rule set consists of a key value and its associated set of rules. Callers of the expansion service provide a key and an original value to be expanded. The service looks up the set of rules for the specified key and cumulatively applies each rule, starting with the original value; the resulting set of values is returned to the caller.
| Key (Attribute) | Rules (original → new) | |
|---|---|---|
| key1 | value1 | replacement1 |
| | value2 | replacement2 |
| | value3 | replacement3 |
| key2 | value1 | replacement1 |
| | value2 | replacement2 |
The examples below use the following collection of rule sets:
| Key (Attribute) | Rules (original → new) | |
|---|---|---|
| Location | Goodyear | Goodyear AZ |
| | AZ | AZ USA |
| | CA | CA USA |
| Title | VP-Sales | VP-Sales VP Sales |
| | VP-Engineering | VP-Engineering VP Engineering |
Note that the rules listed for each key are processed in order, so they may build upon each other, i.e., a new value from the new replacement string may be expanded by a subsequent rule.
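The cumulative behavior described above can be sketched as follows. This is a stand-alone illustration only (the `expand` function and rule-set layout are hypothetical, not DDF's Java implementation):

```python
def expand(rule_sets, key, original, separator=" "):
    """Cumulatively apply the ordered rules for `key`, starting from `original`.

    rule_sets maps a key (attribute name) to an ordered list of
    (original, replacement) pairs; each replacement string is split on the
    separator and duplicates are suppressed.
    """
    values = [original]
    for old, new in rule_sets.get(key, []):
        expanded = []
        for value in values:
            if value == old:
                expanded.extend(new.split(separator))
            else:
                expanded.append(value)
        seen = set()  # suppress duplicates while preserving order
        values = [v for v in expanded if not (v in seen or seen.add(v))]
    return values

# The rule sets from the example table above
rules = {
    "Location": [("Goodyear", "Goodyear AZ"), ("AZ", "AZ USA"), ("CA", "CA USA")],
    "Title": [("VP-Sales", "VP-Sales VP Sales"),
              ("VP-Engineering", "VP-Engineering VP Engineering")],
}

# "Goodyear" expands to "Goodyear AZ"; the new "AZ" is then expanded by the next rule:
print(expand(rules, "Location", "Goodyear"))  # ['Goodyear', 'AZ', 'USA']
```

This matches the value set that `security:expand Location Goodyear` reports later in this section, though the order in which the service prints the values may differ.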
Instances and Configuration
It is expected that multiple instances of the expansion service will be running at the same time. Each instance of the service defines a unique property that is useful for retrieving specific instances of the expansion service. The following table lists the two pre-defined instances used by DDF for expanding user attributes and metacard attributes respectively.
| Property Name | Value | Description |
|---|---|---|
| mapping | security.user.attribute.mapping | This instance is configured with rules that expand the user’s attribute values for security checking. |
| mapping | security.metacard.attribute.mapping | This instance is configured with rules that expand the metacard’s security attributes before comparing them with the user’s attributes. |
Each instance of the expansion service can be configured using a configuration file. The configuration file can have three different types of lines:
* comments - any line prefixed with the # character is ignored as a comment (for readability, blank lines are also ignored)
* attribute separator - a line starting with separator= defines the attribute separator string.
* rule - all other lines are assumed to be rules defined in a string format <key>:<original value>:<new value>
The following configuration file defines the rules shown above in the example table (using the space as a separator):
# This defines the separator that will be used when the expansion string contains multiple
# values - each will be separated by this string. The expanded string will be split at the
# separator string and each resulting attribute added to the attribute set (duplicates are
# suppressed). No value indicates the default value of ' ' (space).
separator=
# The following rules define the attribute expansion to be performed. The rules are of the
# form:
#    <attribute name>:<original value>:<expanded value>
# The rules are ordered, so replacements from the first rules may be found in the original
# values of subsequent rules.
Location:Goodyear:Goodyear AZ
Location:AZ:AZ USA
Location:CA:CA USA
Title:VP-Sales:VP-Sales VP Sales
Title:VP-Engineering:VP-Engineering VP Engineering
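To make the line types concrete, a minimal parser sketch follows (`parse_expansion_config` is a hypothetical helper for explanation only, not DDF's actual parser):

```python
def parse_expansion_config(text):
    """Parse the expansion configuration format: '#' lines and blank lines are
    ignored, 'separator=' sets the separator (an empty value means the default
    of a single space), and every other line is a <key>:<original>:<new> rule.
    """
    separator, rules = " ", []
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("separator="):
            separator = line[len("separator="):] or " "
        else:
            key, original, new = line.split(":", 2)
            rules.append((key, original, new))
    return separator, rules

sep, rules = parse_expansion_config("# comment\nseparator=\nLocation:AZ:AZ USA\n")
print(sep == " ", rules)  # True [('Location', 'AZ', 'AZ USA')]
```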
Expansion Commands
| Title | Namespace | Description |
|---|---|---|
| DDF::Security::Expansion::Commands | security | The expansion commands provide detailed information about the expansion rules in place and the ability to see the results of expanding specific values against the active rule set. |

Expansion Commands:
* security:expand
* security:expansions
Command Descriptions
| Command | Description |
|---|---|
| expand | Runs the expansion service on the provided data, returning the expanded value. |
| expansions | Dumps the ruleset for each active expansion service. |
Expansion Command Examples and Explanation
security:expansions
The security:expansions command dumps the ruleset for each active expansion service. It takes no arguments and displays each rule on a separate line in the form: <attribute name> : <original string> : <expanded string>. The following example shows the results of executing the expansions command with no active expansion service.
ddf@local>security:expansions
No expansion services currently available.
After installing the expansions service and configuring it with an appropriate set of rules, the expansions command will provide output similar to the following:
ddf@local>security:expansions
Location : Goodyear : Goodyear AZ
Location : AZ : AZ USA
Location : CA : CA USA
Title : VP-Sales : VP-Sales VP Sales
Title : VP-Engineering : VP-Engineering VP Engineering
security:expand
The security:expand command runs the expansion service on the provided data. It takes an attribute and an original value, expands the original value using the current expansion service and rule set, and dumps the results. For the rule set shown above, the expand command produces the following results:
ddf@local>security:expand Location Goodyear
[Goodyear, USA, AZ]
ddf@local>security:expand Title VP-Engineering
[VP-Engineering, Engineering, VP]
ddf@local>security:expand Title "VP-Engineering Manager"
[VP-Engineering, Engineering, VP, Manager]
Securing SOAP
SOAP Secure Client
When calling an endpoint from a SOAP secure client, the client first requests the WSDL, and the SOAP endpoint returns it. The client then calls the STS for an authentication token to proceed. If the client receives the token, it makes a secure call to the endpoint and receives results.
Dumb SOAP Client
If calling an endpoint from a non-secure client, at the point of the initial call, the Anonymous Interceptor catches the request and prepares it to be accepted by the endpoint.
First, the interceptor reads the configured policy, builds a security header, and gets an anonymous SAML assertion. Using this, it makes a getSubject call which is sent through Shiro to the STS realm. Upon success, the STS realm returns the subject and the call is made to the endpoint.
Developing DDF Applications
DDF applications are composed of components packaged as Karaf features, which are collections of OSGi bundles. These features can be installed and uninstalled using the Web Console or the command line console. A DDF application consists of one or more OSGi bundles and, possibly, supplemental external files, and is packaged as a Karaf KAR file for easy download and installation. Applications can be stored on a file system or in a Maven repository.
A KAR file is a Karaf-specific archive format (Karaf ARchive). It is a jar file that contains a feature descriptor file and one or more OSGi bundle jar files. The feature descriptor file identifies the application’s name, the set of bundles that need to be installed, and any dependencies on other features that may need to be installed.
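A minimal feature descriptor of the kind packaged into a KAR might look like the following. All names, versions, and Maven coordinates here are illustrative assumptions, not actual DDF artifacts:

```xml
<features name="sample-app" xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
    <feature name="sample-app" version="1.0.0">
        <!-- a dependency on another feature that must be installed first -->
        <feature>dependency-feature</feature>
        <!-- the bundles that make up this application -->
        <bundle>mvn:org.example/sample-endpoint/1.0.0</bundle>
        <bundle>mvn:org.example/sample-plugin/1.0.0</bundle>
    </feature>
</features>
```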
Creating a KAR File
The recommended method for creating a KAR file is to use the features-maven-plugin, which has a create-kar goal (available as of Karaf v2.2.5, which DDF 2.X is based upon). This goal reads all of the features specified in the features descriptor file. For each feature in this file, it resolves the bundles defined in the feature. All bundles are then packaged into the KAR archive.
An example of using the create-kar goal is shown below:
<plugin>
<groupId>org.apache.karaf.tooling</groupId>
<artifactId>features-maven-plugin</artifactId>
<version>2.2.5</version>
<executions>
<execution>
<id>create-kar</id>
<goals>
<goal>create-kar</goal>
</goals>
<configuration>
<descriptors>
<!-- Add any other <descriptor> that the features file may reference here -->
</descriptors>
<!--
Workaround to prevent the target/classes/features.xml file from being included in the
kar file since features.xml already included in kar's repository directory tree.
Otherwise, features.xml would appear twice in the kar file, hence installing the
same feature twice.
Refer to Karaf forum posting at http://karaf.922171.n3.nabble.com/Duplicate-feature-repository-entry-using-archive-kar-to-build-deployable-applications-td3650850.html
-->
<resourcesDir>${project.build.directory}/doesNotExist</resourcesDir>
<!--
Location of the features.xml file. If it references properties that need to be filtered, e.g., ${project.version}, it will need to be
filtered by the maven-resources-plugin.
-->
<featuresFile>/Users/jlcsmith/source/2.8.x/ddf/distribution/docs/target/classes/features.xml</featuresFile>
<!-- Name of the kar file (.kar extension added by default). If not specified, defaults to the project's artifactId and version -->
<finalName>ddf-ifis-2.8.2</finalName>
</configuration>
</execution>
</executions>
</plugin>
Examples of how KAR files are created for DDF components can be found in the DDF source code under the ddf/distribution/ddf-kars directory.
The generated .kar file should be deployed to the application author’s Maven repository. The URL to the application’s KAR file in this Maven repository should be used as the installation URL.
Including Data Files in a KAR File
The developer may need to include data or configuration file(s) in a KAR file. An example of this is a properties file for the JDBC connection properties of a catalog provider.
It is recommended that:
-
Any data/configuration files be placed under the src/main/resources directory of the Maven project. Sub-directories under src/main/resources can be used, e.g., etc/security.
-
The Maven project’s pom file be updated to attach each data/configuration file as an artifact (using the build-helper-maven-plugin).
-
Each data/configuration file be added to the KAR file using the <configfile> tag in the KAR’s features.xml file.
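Putting these recommendations together, a KAR’s features.xml might reference an attached data file as follows (the artifact coordinates and target path here are hypothetical):

```xml
<feature name="example-app" version="1.0.0">
  <!-- copies the attached properties artifact to etc/security in the DDF installation -->
  <configfile finalname="etc/security/jdbc.properties">
    mvn:org.example/example-app/1.0.0/properties/jdbc
  </configfile>
  <bundle>mvn:org.example/example-catalog-provider/1.0.0</bundle>
</feature>
```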
Installing a KAR File
When the user downloads an application by clicking on the Installation link, the application’s KAR file is downloaded. This KAR file should be placed in the <DDF_INSTALL_DIR>/deploy directory of the running DDF instance. DDF then detects that a file with a .kar file extension has been placed in this monitored directory, unzips the KAR file into the <DDF_INSTALL_DIR>/system directory, and installs the bundle(s) listed in the KAR file’s feature descriptor file. The user can then go to the Web Console’s Features tab and verify the new feature(s) is installed.
OGC Filter
OGC Filter
An OGC Filter is an Open Geospatial Consortium (OGC) standard that describes a query expression in terms of XML and Key-Value Pairs (KVP).
DDF originally had a custom query representation that some found difficult to understand and implement. In switching to a well-known standard like the OGC Filter, developers benefit from various third party products and third party documentation, as well as any previous experience with the standard. The OGC Filter is used to represent a query to be sent to sources and the Catalog Provider, as well as to represent a Subscription. The OGC Filter provides support for expression processing, such as adding or dividing expressions in a query, but that is not the intended use for DDF.
OGC filter in the DDF Catalog
The DDF Catalog Framework uses the implementation provided by Geotools, which provides a Java representation of the standard.
Geotools originally provided standard Java classes for the OGC Filter Encoding 1.0, under the package name org.opengis.filter, which is where org.opengis.filter.Filter is located. Java developers should use the Java objects exclusively to complete query tasks, rather than parsing or viewing the XML representation.
Utilities
Each DDF Application is located in its own code repository. In addition to the applications, there are other utilities available in other code repositories on GitHub. These utilities are deployed into Nexus for easier accessibility.
DDF-libs
DDF-libs is a repository for library modules in DDF. Typically the modules in this repository are re-usable across different components of DDF.
DDF Load Balancer
Provides a utility that allows incoming traffic to be distributed over multiple instances of DDF, via HTTP or HTTPS.
Contained within DDF is a Load Balancer utility that allows incoming traffic to be distributed over multiple instances of DDF. The DDF Load Balancer supports two protocols: HTTP and HTTPS. The Load Balancer can be configured to run one protocol or both at the same time. The DDF Load Balancer has been configured to utilize a "Round Robin" algorithm to distribute transactions. The load balancer is also equipped with a failover mechanism. When the load balancer attempts to access a server that is non-functional, it receives an exception and moves on to the next server on the list to complete the transaction. The action is retried on each server once before failing back to the client.
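The round-robin-with-single-pass-failover behavior described above can be sketched in plain Java. This is an illustrative model of the algorithm, not DDF’s actual load balancer code:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of round-robin dispatch with failover: each backend node is
// tried at most once per request, in rotating order.
public class RoundRobin {
    private final List<String> nodes;
    private int next = 0;

    public RoundRobin(List<String> nodes) {
        this.nodes = nodes;
    }

    // Returns the first node for which tryNode succeeds; returns null
    // after every server has failed once (fail back to the client).
    public String dispatch(Predicate<String> tryNode) {
        for (int i = 0; i < nodes.size(); i++) {
            String node = nodes.get(next);
            next = (next + 1) % nodes.size();
            if (tryNode.test(node)) {
                return node;
            }
        }
        return null;
    }
}
```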
Set up the DDF Load Balancer
The main method for installing the DDF Load Balancer is to install the application (.kar) file into the hot deploy folder of a full DDF distribution.
Prerequisites
Before the DDF Load Balancer can be installed:
-
the DDF Kernel must be running
-
the DDF Platform Application must be installed
Install
Complete the following procedure to install the DDF Load Balancer.
-
Download the application (.kar) file from the artifacts repo (http://artifacts.codice.org/).
-
Copy the KAR file into the <INSTALL_DIRECTORY>/deploy folder of a currently running DDF distribution.
-
Uninstall the included jetty feature. Note: this step is needed due to a bug in the version of jetty currently delivered with DDF and will be removed once that version is updated.
features:uninstall jetty
Verify
-
Verify all of the Load Balancer’s appropriate features have been successfully installed.
DDF Load Balancer installed features
ddf@local>features:list | grep -i loadbalancer-app
[installed ] [1.0.0            ] codice-load-balancer  loadbalancer-app-1.0.0  Load Balancer
[installed ] [7.6.12.v20130726 ] loadbalancer-jetty    loadbalancer-app-1.0.0  Provide Jetty engine support
-
Verify the DDF Load Balancer bundles are Active.
DDF Load Balancer active bundles
[ 223] [Active ] [ ] [ ] [ 50] camel-http (2.12.1)
[ 224] [Active ] [ ] [ ] [ 50] camel-jetty (2.12.1)
[ 258] [Active ] [ ] [ ] [ 80] Jetty :: Utilities (7.6.12.v20130726)
[ 259] [Active ] [ ] [ ] [ 80] Jetty :: IO Utility (7.6.12.v20130726)
[ 260] [Active ] [ ] [ ] [ 80] Jetty :: Http Utility (7.6.12.v20130726)
[ 261] [Active ] [ ] [ ] [ 80] Jetty :: Asynchronous HTTP Client (7.6.12.v20130726)
[ 262] [Active ] [ ] [ ] [ 80] Jetty :: Continuation (7.6.12.v20130726)
[ 263] [Active ] [ ] [ ] [ 80] Jetty :: JMX Management (7.6.12.v20130726)
[ 264] [Active ] [ ] [ ] [ 80] Jetty :: Server Core (7.6.12.v20130726)
[ 265] [Active ] [ ] [ ] [ 80] Jetty :: Security (7.6.12.v20130726)
[ 266] [Active ] [ ] [ ] [ 80] Jetty :: Servlet Handling (7.6.12.v20130726)
[ 267] [Active ] [ ] [ ] [ 80] Jetty :: Utility Servlets and Filters (7.6.12.v20130726)
[ 268] [Active ] [ ] [ ] [ 80] Jetty :: XML utilities (7.6.12.v20130726)
[ 269] [Active ] [ ] [ ] [ 80] Jetty :: Webapp Application Support (7.6.12.v20130726)
[ 270] [Active ] [ ] [ ] [ 80] Jetty :: JNDI Naming (7.6.12.v20130726)
[ 271] [Active ] [ ] [ ] [ 80] Jetty :: Plus (7.6.12.v20130726)
[ 272] [Active ] [ ] [ ] [ 80] Jetty :: Websocket (7.6.12.v20130726)
[ 273] [Active ] [Created ] [ ] [ 80] Codice :: Loadbalancer :: Camel (1.0.0)
Uninstall
|
It is very important to save the KAR file for the application prior to an uninstall so that the uninstall can be reverted if necessary. |
Complete the following procedure to uninstall the DDF Load Balancer.
-
Delete the KAR file (loadbalancer-app-X.Y.kar) from the <INSTALL_DIRECTORY>/deploy directory.
-
Re-install the jetty feature.
features:install jetty
-
Restart DDF to ensure that all of the Jetty bundles are refreshed properly.
Configure the DDF Load Balancer
The DDF Load Balancer can be configured to allow multiple DDF nodes to be balanced. It can also be configured with a port on which to accept connections. Configurations differ slightly between the HTTP- and HTTPS-based load balancers. All configurations are dynamic in that configuration settings are immediately applied, and it is not necessary to restart DDF.
Configure the HTTP Load Balancer
Complete the following procedure to access the load balancer configuration.
-
Click the Configuration tab in the DDF management console.
-
Scroll down to the configuration entry that is labeled Platform HTTP Load Balancer. The configuration for the load balancer contains two fields: Load Balancer Port and IP Address and Port.
-
In the Load Balancer Port field, enter the port number to be accessed when reaching other systems.
-
In the IP Address and Port field, enter a comma-delimited list of IP addresses and ports for each DDF node to be balanced. The format for IP address and port is <IP_ADDRESS>:<PORT> (e.g., 192.168.1.123:8181,192.168.1.22:8181).
-
Select the Save button when all configurations have been added.
At this point, the load balancer is reset and ready to accept requests. These configurations can be updated at any time without starting the host DDF instance.
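The comma-delimited address format used in the IP Address and Port field can be illustrated with a small parser. This is a hypothetical helper for clarity, not part of DDF:

```java
import java.util.ArrayList;
import java.util.List;

// Parses a comma-delimited list of <IP_ADDRESS>:<PORT> entries, the
// format used in the load balancer's "IP Address and Port" field.
public class NodeListParser {
    public static List<String[]> parse(String config) {
        List<String[]> nodes = new ArrayList<>();
        for (String entry : config.split(",")) {
            String[] parts = entry.trim().split(":");
            if (parts.length != 2) {
                throw new IllegalArgumentException(
                        "Expected <IP_ADDRESS>:<PORT>, got: " + entry);
            }
            nodes.add(new String[] { parts[0], parts[1] }); // {address, port}
        }
        return nodes;
    }
}
```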
Configure the HTTPS Load Balancer
It is possible to run the HTTPS load balancer by itself or run it in parallel with the HTTP load balancer. The HTTPS load balancer utilizes the centralized SSL configurations within DDF, along with the load balancer configurations. Complete the following procedure to configure the HTTPS load balancer.
-
Find the SSL configurations and verify that the values are correct.
-
Click the Configuration tab in the DDF management console.
-
Scroll down to the configuration entry that is labeled Pax Web Runtime.
-
Select Pax Web Runtime.
-
Ensure that the displayed settings match what is configured in the DDF nodes to be balanced.
-
Save the updated settings or close the window, as applicable.
-
In the configuration table, scroll down to the configuration entry labeled Platform HTTPS Load Balancer.
-
The configuration for the load balancer contains three fields: Load Balancer Host, Load Balancer Port, and IP Address and Port.
-
In the Load Balancer Host field, enter the host name or IP address to be used for the host load balancer machine.
-
In the Load Balancer Port field, enter the port number to be accessed on the load balancer to reach the other systems.
-
In the IP Address and Port field, enter a comma-delimited list of IP addresses and ports for each DDF node to be balanced. The format for IP address and port is <IP_ADDRESS>:<PORT> (e.g., 192.168.1.123:8993,192.168.1.22:8993).
-
Select the Save button when all configurations have been added.
Since SSL requests will be coming from a client into the load balancer, it is essential that the nodes being balanced have the same security policy and settings. The client has no idea which DDF server it will be connecting with behind the load balancer. The client is responsible for connecting securely with the load balancer, and the load balancer is responsible for connecting securely and consistently with all DDF nodes.
|
The DDF Load Balancer cannot run on the same port as the DDF Web Console or other web services. If you would like the load balancer to run on this port, change the web console port to a different port number. This configuration parameter can be found in the Pax Web Runtime configuration. |
DDF STOMP
The DDF STOMP application allows query subscription messages to be sent to the DDF server via STOMP protocol.
Subscription query messages are defined in JSON format following a defined schema. These messages allow for the management of subscriptions using create, time to live (TTL), update, and delete functions. Catalog queries within the message are defined in CQL format. Results are sent to a STOMP-based topic, which can be subscribed to via a STOMP-based client. Content results will be delivered over time as the subscription query matches incoming data published to the DDF catalog.
STOMP
STOMP is a streaming text-based messaging protocol that supports the delivery of messages as well as publish and subscribe. STOMP mimics HTTP and runs over TCP/IP, making it compatible with many different programming languages. STOMP is simple to implement and easy to test. For more information, refer to http://stomp.github.io/.
Common Query Language (CQL)
Common Query Language (CQL) is a query language that the OGC has chosen for expressing data filtering. The power of CQL is its strong integration into GeoTools, which helps represent complex queries as text strings. For more information, refer to http://docs.geotools.org/latest/userguide/library/cql/index.html.
Publish and Subscribe Query Subscription Message
Subscriptions are stateful and will survive a server restart. Subscription messages are specified in a JSON format. The schema section specifies the values that construct this message, along with the defaults. The sections below describe the meaning of the message values.
Subscription Identifier (subscriptionId)
The subscription identifier is a unique string that is provided by the query subscription requester. The preferred value should be a generated UUID. Example: "subscriptionId" : "faf4e8493h389fh4398f3h0040"
Action (action)
The action value tells the system what action should take place. The following actions are available:
CREATE: Creates a new subscription
UPDATE: Updates an existing subscription (requires subscriptionId)
DELETE: Deletes an existing subscription (requires subscriptionId)
Example: "action" : "CREATE"
Subscription Time to Live (subscriptionTtlType and subscriptionTtl)
A time to live value can be set on a subscription by providing values for the TTL type and TTL. The subscriptionTtlType value describes the type of TTL that will be used. The following values are available for subscriptionTtlType:
-
MILLISECONDS
-
SECONDS
-
MINUTES
-
HOURS
-
MONTHS
-
YEARS
The subscriptionTtl value is an integer that specifies the quantity of the subscriptionTtlType. For example, a subscriptionTtl of 60 with a subscriptionTtlType of 'HOURS' tells the system that the subscription should last for 60 hours. If subscriptionTtl is 0, negative, or unset, the time to live is infinite. Example: "subscriptionTtlType" : "HOURS", "subscriptionTtl" : 90
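The TTL semantics above can be sketched as a conversion to milliseconds. This is illustrative only: MONTHS and YEARS are not java.util.concurrent.TimeUnit values, so they are approximated here as 30-day months and 365-day years; the real system may compute these differently.

```java
import java.util.concurrent.TimeUnit;

// Sketch: converts (subscriptionTtlType, subscriptionTtl) to milliseconds.
// A zero, negative, or unset TTL means the subscription never expires.
public class TtlConverter {
    public static long toMillis(String ttlType, long ttl) {
        if (ttl <= 0) {
            return Long.MAX_VALUE; // treated as infinite
        }
        switch (ttlType) {
            case "MILLISECONDS": return ttl;
            case "SECONDS":      return TimeUnit.SECONDS.toMillis(ttl);
            case "MINUTES":      return TimeUnit.MINUTES.toMillis(ttl);
            case "HOURS":        return TimeUnit.HOURS.toMillis(ttl);
            case "MONTHS":       return TimeUnit.DAYS.toMillis(ttl * 30);  // approximation
            case "YEARS":        return TimeUnit.DAYS.toMillis(ttl * 365); // approximation
            default:
                throw new IllegalArgumentException("Unknown TTL type: " + ttlType);
        }
    }
}
```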
Query String (queryString)
The query string allows the user to specify a query that can filter targeted results. Query string values are specified in CQL format. See the CQL section for more information on using this format. Example: "queryString" : "anyText LIKE 'Red Truck'"
Sources (sources)
Source targets can be specified in the message. The specified sources are passed to each query so that results are retrieved from the correct sources. The sources value is an array.
Example: "sources" : [ "source1", "source2", "source3" ]
Creation Date and Last Modified Date (creationDate and lastModifiedDate)
The creation date and last modified date are values that are written into the subscription by the system. The creation date specifies the date and time that the subscription was created in the system. The last modified date specifies the date and time of the last change to the subscription. The user can specify these values in the subscription message, but they will be ignored.
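Putting the fields above together, a CREATE message could be assembled as follows. This is an illustrative helper, not a DDF API; the field names match the schema and the values are examples:

```java
// Builds a CREATE subscription message in the JSON format described above.
public class SubscriptionMessage {
    public static String create(String subscriptionId, String ttlType,
                                long ttl, String cqlQuery) {
        return "{\n"
             + "  \"subscriptionId\" : \"" + subscriptionId + "\",\n"
             + "  \"action\" : \"CREATE\",\n"
             + "  \"subscriptionTtlType\" : \"" + ttlType + "\",\n"
             + "  \"subscriptionTtl\" : " + ttl + ",\n"
             + "  \"queryString\" : \"" + cqlQuery + "\"\n"
             + "}";
    }
}
```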
DDF STOMP Setup
DDF STOMP can be found at https://github.com/codice/ddf-stomp. Building DDF STOMP produces a KAR application file. This file can be added to the DDF server by placing it in the deploy folder. Once the application has been deployed, it is best to restart DDF.
Configuration
DDF STOMP has a default configuration that it utilizes. By default, the STOMP server within DDF STOMP runs on port 61613. To change this port number, modifications must be made to activemq.xml and the DDF configuration. The activemq.xml file can be found in the etc/ directory of DDF. Open the file and navigate to the bottom; an entry for transportConnector reads stomp://0.0.0.0:61613. The port number at the end can be changed as desired. Once changed, save the file and exit. The second half of configuring the port number and other options can be found in the DDF configuration web console. Upon selecting the Configurations tab, look for the configuration named Publish Subscribe Subscription Query Service. This configuration has the following options:
- Destination Topic Name
-
The topic destination where query subscription messages are sent.
- Subscription Topic Name
-
The prefix of the topic where subscription results are sent.
- STOMP Host
-
The host name of the STOMP server.
- STOMP Port
-
The port number of the STOMP server.
- Default Max Results
-
The maximum number of results to return from a given query.
- Default Request Timeout
-
The maximum time to wait before a request is timed out.
- Transformer ID
-
ID that specifies the format in which results are produced (default: geojson). See Extending Catalog Transformers for more information.
Once these configurations have been made, it is best to restart the DDF server.
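The activemq.xml edit described above amounts to changing the port in the transportConnector entry (61613 is the default; the surrounding element names shown here are illustrative of a typical ActiveMQ configuration):

```xml
<!-- etc/activemq.xml: change the STOMP port here -->
<transportConnectors>
  <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
</transportConnectors>
```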
Send a Subscription Message
Subscription messages can be sent to the system using a STOMP-based client. STOMP is a standardized publish subscribe messaging protocol modeled after HTTP. Connections to the system are asynchronous. For more information, refer to the section on STOMP. To connect the STOMP client to the system, the following information is typically required: STOMP server host, STOMP port, user name, password, and topic name. The username, password, port, and topic name can be found in the publish subscribe query subscriptions configuration. Refer to the Configuration section for more information.
The Gozirra STOMP client (http://www.germane-software.com/software/Java/Gozirra/) and the Fuse Source STOMP client (https://github.com/fusesource/stompjms) have both been used successfully to test functionality.
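At the wire level, submitting a subscription message is an ordinary STOMP SEND frame (command line, headers, blank line, body, NUL terminator), which any of the clients above will construct for you. The sketch below builds such a frame as raw text per the STOMP specification; the destination name is illustrative, and real values come from the Publish Subscribe Subscription Query Service configuration:

```java
// Builds a raw STOMP SEND frame carrying a JSON subscription message.
// Frame layout per the STOMP spec: command, headers, blank line, body, NUL.
public class StompFrame {
    public static String send(String destination, String jsonBody) {
        return "SEND\n"
             + "destination:" + destination + "\n"
             + "content-type:application/json\n"
             + "\n"
             + jsonBody
             + "\0"; // NUL byte terminates the frame
    }
}
```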
Subscribe for Results
A STOMP-based client can be used to retrieve results from a query subscription. STOMP is a standardized publish subscribe messaging protocol modeled after HTTP. Connections to the system are asynchronous. For more information, refer to the section on STOMP. To connect the STOMP client to the system, the following information is typically required: STOMP server host, STOMP port, user name, password, and topic name. The username, password, port, and topic name can be found in the publish subscribe query subscriptions configuration. Refer to the Configuration section for more information. The topic name for subscription results, which is found in the configuration, is a partial name. The end of the topic name will include the subscription ID. For example, if the partial topic name was "/topic/result/" and the subscription ID was "faf4e8493h389fh4398f3h0040", the actual topic name that would be subscribed to is "/topic/result/faf4e8493h389fh4398f3h0040". This is the full topic name to use with a STOMP client to retrieve results.
The Gozirra STOMP client (http://www.germane-software.com/software/Java/Gozirra/) and the Fuse Source STOMP client (https://github.com/fusesource/stompjms) have both been used successfully to test functionality.
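The topic-name construction described above is simple concatenation of the configured partial topic name and the subscription ID, as in this sketch (a hypothetical helper, not a DDF class):

```java
// Builds the full result topic from the configured partial topic name
// and the subscription ID.
public class ResultTopic {
    public static String of(String partialTopicName, String subscriptionId) {
        String prefix = partialTopicName.endsWith("/")
                ? partialTopicName
                : partialTopicName + "/";
        return prefix + subscriptionId;
    }
}
```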
Returned Messages
Return messages will be delivered back to a subscribing STOMP client. All delivered messages are returned in GeoJSON format. When a subscription is first submitted, an initial query is executed on existing data in the catalog. All results of this query are immediately returned to the subscribing STOMP client via the defined topic. As content is added to the catalog, items matching the query in the subscription are immediately returned to the subscribing STOMP client via the defined topic.
Examples
Create a new subscription that lasts for 90 hours and searches for any text matching 'red car':
{
"subscriptionId" : "faf4e8493h389fh4398f3h0060",
"action" : "CREATE",
"subscriptionTtlType" : "HOURS",
"subscriptionTtl" : 90,
"queryString" : "anyText LIKE 'red car'"
}
Update a previous subscription to last for three months instead of 90 hours:
{
"subscriptionId" : "faf4e8493h389fh4398f3h0060",
"action" : "UPDATE",
"subscriptionTtlType" : "MONTHS",
"subscriptionTtl" : 3,
"queryString" : "anyText LIKE 'red car'"
}
Delete the previous subscription:
{
"subscriptionId" : "faf4e8493h389fh4398f3h0060",
"action" : "DELETE"
}
Architecture
The API is set up to utilize STOMP as the protocol for receiving subscription management messages. External third party applications will utilize STOMP to send commands for subscription management and delivery.
Subscription Message Schema
The messages sent over STOMP will be in JSON format. The messages used for subscription management utilize the following schema:
{
"type":"object",
"$schema": "http://json-schema.org/draft-03/schema",
"id": "http://jsonschema.net",
"required":false,
"properties":{
"queryString": {
"type":"string",
"id": "http://jsonschema.net/searchPhrase",
"required":true
},
"sources": {
"type":"array",
"id": "http://jsonschema.net/sources",
"required":false,
"items":
{
"type":"string",
"id": "http://jsonschema.net/sources/0",
"required":false
}
},
"subscriptionId": {
"type":"string",
"id": "http://jsonschema.net/subscriptionId",
"required":true
},
"action": {
"type":"string",
"id": "http://jsonschema.net/subscriptionId",
"required":true
},
"subscriptionTtl": {
"type":"number",
"id": "http://jsonschema.net/subscriptionId",
"required":false
},
"subscriptionTtlType": {
"type":"number",
"id": "http://jsonschema.net/subscriptionId",
"required":false
},
"creationDate": {
"type":"number",
"id": "http://jsonschema.net/subscriptionId",
"required":false
},
"lastModifiedDate": {
"type":"number",
"id": "http://jsonschema.net/subscriptionId",
"required":false
},
}
}
Overview
The administrative application enhances administrative capabilities when installing and managing DDF. It contains various services and interfaces that allow administrators more control over their systems.
This guide supports developers creating extensions of the existing framework.
Overview
The DDF Catalog provides a framework for storing, searching, processing, and transforming information. Clients typically perform query, create, read, update, and delete (QCRUD) operations against the Catalog. At the core of the Catalog functionality is the Catalog Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.
This guide supports developers creating extensions of the existing framework.
Whitelist
The following packages have been exported by the DDF Catalog application and are approved for use by third parties:
-
ddf.catalog
-
ddf.catalog.util
-
ddf.catalog.event
-
ddf.catalog.validation
-
ddf.catalog.source
-
ddf.catalog.filter
-
ddf.catalog.federation
-
ddf.catalog.plugin
-
ddf.catalog.operation
-
ddf.catalog.transform
-
ddf.catalog.data
-
ddf.catalog.resource
-
ddf.measure
-
ddf.catalog.filter.delegate
-
ddf.catalog.impl.filter
-
ddf.services.schematron
-
ddf.geo.formatter
Catalog Application Services
As an OSGi system, DDF handles intra-module communication via services. The following summarizes the DDF internal services within the Catalog application.
Catalog Framework
The CatalogFramework is the routing mechanism between catalog components that provides integration points for the Catalog Plugins. An endpoint invokes the active Catalog Framework, which calls any configured Pre-query or Pre-ingest plug-ins. The selected federation strategy calls the active Catalog Provider and any connected or federated sources. Then, any Post-query or Post-ingest plug-ins are invoked. Finally, the appropriate response is returned to the calling endpoint.
Sources
A source is a system consisting of a catalog containing Metacards.
CatalogProvider
The Catalog Provider is an API used to interact with data providers, such as file systems or databases, to query, create, update, or delete data. The provider also translates between DDF objects and native data formats.
ConnectedSource
A Connected Source is a local or remote source that is always included in every local and enterprise query, but is hidden from being queried individually.
FederatedSource
A Federated Source is a remote source that can be optionally included or excluded from queries.
Plugins
Plugins are tools for adding business logic at certain points, depending on the type of plugin. Plugins can be designed to run before or after certain processes. They are often used for validation, optimization, or logging.
"Pre-" Plugins
These plugins are executed before an action is taken.
| Plugin | Description |
|---|---|
Pre-Ingest Plugin |
Performs any changes to a resource prior to ingesting it. |
Pre-Query Plugin |
Performs any changes to a query before it executes. |
Pre-Resource Plugin |
Performs any changes to a resource associated with a metacard prior to download. |
Pre-Subscription Plugin |
Performs any changes before creating a subscription. |
Pre-Delivery Plugin |
Performs any changes before delivering a subscribed event. |
“Post-“ Plugins
| Plugin | Description |
|---|---|
Post-Ingest Plugin |
Performs actions after ingest is completed. |
Post-Query Plugin |
Performs any changes to response after query completes. |
Post-Get Resource Plugin |
Performs any changes to a resource after download. |
Transformers
Transformers are used to alter the format of a resource or its metadata to or from the catalog’s metacard format.
| Transformer | Description |
|---|---|
Input Transformers |
Create metacards from input. |
Metacard Transformers |
Translate a metacard from catalog metadata to a specific data format. |
Query Response Transformers |
Translate a list of Result objects to a desired format. |
Catalog Development Fundamentals
This section introduces the fundamentals of working with the Catalog API and the OGC Filter for Queries.
Simple Catalog API Implementations
The Catalog API implementations, which are denoted with the suffix Impl on the Java file names, have multiple purposes and uses.
-
First, they provide a good starting point for other developers to extend functionality in the framework. For instance, extending the MetacardImpl allows developers to focus less on the inner workings of DDF and more on the developer’s intended purposes and objectives.
-
Second, the Catalog API implementations display the proper usage of an interface and an interface’s intentions. They are also good code examples for future implementations. If a developer does not want to extend the simple implementations, the developer at least has a working code reference on which to base future development.
Use of the Whiteboard Design Pattern
The DDF Catalog makes extensive use of the Whiteboard Design Pattern. Catalog Components are registered as services in the OSGi Service Registry, and the Catalog Framework or any other clients tracking the OSGi Service Registry are automatically notified by the OSGi Framework of additions and removals of relevant services.
The Whiteboard Design Pattern is a common OSGi technique that is derived from a technical whitepaper provided by the OSGi Alliance in 2004. It is recommended to use the Whiteboard pattern over the Listener pattern in OSGi because it provides less complexity in code (both on the client and server sides), fewer deadlock possibilities than the Listener pattern, and closely models the intended usage of the OSGi framework.
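The Whiteboard pattern can be sketched in plain Java: providers register services with a central registry, and interested trackers are notified of additions and removals. In DDF this role is played by the OSGi Service Registry; the classes below are illustrative only, not OSGi APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the Whiteboard pattern: a registry notifies trackers
// whenever a service is registered or unregistered.
public class Whiteboard {
    public interface Tracker {
        void added(Object service);
        void removed(Object service);
    }

    private final List<Object> services = new ArrayList<>();
    private final List<Tracker> trackers = new ArrayList<>();

    public void track(Tracker tracker) {
        trackers.add(tracker);
    }

    public void register(Object service) {
        services.add(service);
        for (Tracker t : trackers) t.added(service);   // push notification
    }

    public void unregister(Object service) {
        services.remove(service);
        for (Tracker t : trackers) t.removed(service);
    }
}
```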
Working with Queries
Clients use ddf.catalog.operation.Query objects to describe which metacards are needed from Sources. Query objects have two major components:
-
Filter
-
Query Options
A Source uses the Filter criteria constraints to find the requested set of metacards within its domain of metacards. The Query Options are used to further restrict the Filter’s set of requested metacards. See the Creating Filters section for more on Filters.
Query Options
| Option | Description |
|---|---|
StartIndex |
1-based index that states which metacard the Source should return first out of the requested metacards. |
PageSize |
Represents the maximum number of metacards the Source should return. |
SortBy |
Determines how the results are sorted and on which property. |
RequestsTotalResultsCount |
Determines whether the total number of results should be returned. |
TimeoutMillis |
The amount of time in milliseconds before the query is to be abandoned. |
Creating a query
The easiest way to create a Query is to use the ddf.catalog.operation.QueryImpl object. It is first necessary to create an OGC Filter object, then set the Query Options after QueryImpl has been constructed.
/*
Builds a query that requests a total results count and
that the first record to be returned is the second record found from
the requested set of metacards.
*/
String property = ...;
String value = ...;
org.geotools.filter.FilterFactoryImpl filterFactory = new FilterFactoryImpl();
QueryImpl query = new QueryImpl(filterFactory.equals(filterFactory.property(property),
        filterFactory.literal(value)));
query.setStartIndex(2);
query.setRequestsTotalResultsCount(true);
Evaluating a query
Every Source must be able to evaluate a Query object. Nevertheless, each Source could evaluate the Query differently depending on the properties and query capabilities that Source supports. For instance, a common property all Sources understand is id, but a Source could store frequency values under the property name "frequency." Some Sources may not support frequency property inquiries and will throw an error stating they cannot interpret the property. In addition, some Sources might be able to handle spatial operations, while others might not. A developer should consult a Source’s documentation for the limitations, capabilities, and properties that the Source supports.
Working with Filters
An OGC Filter is an Open Geospatial Consortium (OGC) standard (http://www.opengeospatial.org/standards/filter) that describes a query expression in terms of Extensible Markup Language (XML) and key-value pairs (KVP). The DDF Catalog Framework does not use the XML representation of the OGC Filter standard. DDF instead utilizes the Java implementation provided by Geotools (http://geotools.org/). Geotools provides Java equivalent classes for OGC Filter XML elements. Geotools originally provided the standard Java classes for the OGC Filter Encoding 1.0 under the package name org.opengis.filter. The same package name is used today and is currently used by DDF. Java developers do not parse or view the XML representation of a Filter in DDF. Instead, developers use only the Java objects to complete query tasks.
Note that the ddf.catalog.operation.Query interface extends the org.opengis.filter.Filter interface, which means that a Query object is an OGC Java Filter with Query Options.
public interface Query extends Filter
Using Filters
FilterBuilder API
To abstract developers from the complexities of working with the Filter interface directly and implementing the DDF Profile of the Filter specification, the DDF Catalog includes an API, primarily in ddf.filter, to build Filters using a fluent API.
To use the FilterBuilder API, an instance of ddf.filter.FilterBuilder should be used via the OSGi registry. Typically, this will be injected via a dependency injection framework. Once an instance of FilterBuilder is available, methods can be called to create and combine Filters.
|
The fluent API is best accessed using an IDE that supports code-completion. For additional details, refer to the Catalog API Javadoc. |
Boolean Operators
FilterBuilder.allOf(Filter …) creates a new Filter that is satisfied only when all provided Filters are satisfied (Boolean AND), accepting either a List or an Array of Filter instances.
FilterBuilder.anyOf(Filter …) creates a new Filter that is satisfied when at least one of the provided Filters is satisfied (Boolean OR), accepting either a List or an Array of Filter instances.
FilterBuilder.not(Filter filter) creates a new Filter that is satisfied only when the provided Filter is not satisfied (Boolean NOT).
Attribute
FilterBuilder.attribute(String attributeName) begins a fluent API for creating an Attribute-based Filter, i.e., a Filter that matches on Metacards with Attributes of a particular value.
XPath
FilterBuilder.xpath(String xpath) begins a fluent API for creating an XPath-based Filter, i.e., a Filter that matches on Metacards with Attributes of type XML that match when evaluating a provided XPath selector.
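The chaining style these methods enable can be illustrated with a tiny self-contained stand-in. The classes below are hypothetical and exist only to show the shape of a fluent builder; the real API is ddf.filter.FilterBuilder, obtained from the OSGi registry.

```java
// Minimal stand-in illustrating the fluent chaining style of the FilterBuilder API.
// These classes are illustrative only; the real API lives in the ddf.filter package.
public class FluentFilterDemo {

    public static AttributeStep attribute(String name) {
        return new AttributeStep(name);
    }

    public static class AttributeStep {
        private final String name;

        AttributeStep(String name) {
            this.name = name;
        }

        // Intermediate steps return the builder so calls read naturally in a chain.
        public AttributeStep is() { return this; }
        public AttributeStep like() { return this; }

        // Terminal step produces the finished filter (here, just a description).
        public String text(String phrase) {
            return name + " LIKE '" + phrase + "' (case-insensitive)";
        }
    }
}
```

With the real FilterBuilder, the equivalent call is filterBuilder.attribute("title").is().like().text("mission"), whose terminal step returns an org.opengis.filter.Filter rather than a String.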
Contextual Operators
FilterBuilder.attribute(attributeName).is().like().text(String contextualSearchPhrase);
FilterBuilder.attribute(attributeName).is().like().caseSensitiveText(String caseSensitiveContextualSearchPhrase);
FilterBuilder.attribute(attributeName).is().like().fuzzyText(String fuzzySearchPhrase);
Directly Implementing the Filter (Advanced)
|
Implementing the Filter interface directly is only for extremely advanced use cases and is highly discouraged. Instead, use of the DDF-specific FilterBuilder API is recommended. |
Developers create a Filter object in order to filter or constrain the amount of records returned from a Source. The OGC Filter Specification has several types of filters that can be combined in a tree-like structure to describe the set of metacards that should be returned.
Categories of Filters
-
Comparison Operators
-
Logical Operators
-
Expressions
-
Literals
-
Functions
-
Spatial Operators
-
Temporal Operators
Units of Measure
According to the OGC Filter Specifications 09-026r1 and 04-095 (http://www.opengeospatial.org/standards/filter), units of measure can be expressed as a URI. To fulfill that requirement, DDF utilizes the Geotools class org.geotools.styling.UomOgcMapping for spatial filters requiring a standard for units of measure for scalar distances. Essentially, the
UomOgcMapping
maps the OGC Symbology Encoding (
http://www.opengeospatial.org/standards/symbol
) standard URIs to Java Units. This class provides three options for units of measure:
-
FOOT
-
METRE
-
PIXEL
DDF only supports FOOT and METRE since they are the most applicable to scalar distances.
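Because only FOOT and METRE are supported, a Source that stores distances metrically must convert scalar distances expressed in feet before comparing them. The sketch below is plain arithmetic using the standard 0.3048 m/ft definition; it does not use the GeoTools UomOgcMapping class itself.

```java
public class DistanceUnits {
    // The international foot is defined as exactly 0.3048 metres.
    public static final double METRES_PER_FOOT = 0.3048;

    // Convert a scalar distance in feet (e.g., from a Beyond or DWithin filter)
    // into metres for a Source that stores distances metrically.
    public static double feetToMetres(double feet) {
        return feet * METRES_PER_FOOT;
    }
}
```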
Creating Filters
The common way to create a Filter is to use the Geotools FilterFactoryImpl object, which provides Java implementations for the various types of filters in the Filter Specification. Examples are the easiest way to understand how to properly create a Filter and a Query.
|
Refer to the Geotools javadoc for more information on FilterFactoryImpl. |
The example below illustrates creating a query, and thus an OGC Filter, that does a case-insensitive search for the phrase "mission" in the entire metacard’s text. Note that the OGC PropertyIsLike Filter is used for this simple contextual query.
Example Creating-Filters-1
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl();
boolean isCaseSensitive = false;
String wildcardChar = "*"; // used to match zero or more characters
String singleChar = "?"; // used to match exactly one character
String escapeChar = "\\"; // used to escape the meaning of the wildCard, singleChar, and the escapeChar itself
String searchPhrase = "mission";

org.opengis.filter.Filter propertyIsLikeFilter =
    filterFactory.like(filterFactory.property(Metacard.ANY_TEXT), searchPhrase, wildcardChar, singleChar, escapeChar, isCaseSensitive);

ddf.catalog.operation.QueryImpl query = new QueryImpl(propertyIsLikeFilter);
The example below illustrates creating an absolute temporal query, meaning the query is searching for Metacards whose modified timestamp occurred during a specific time range. Note that this query uses the During OGC Filter for an absolute temporal query.
Example Creating-Filters-2
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl();

org.opengis.temporal.Instant startInstant = new org.geotools.temporal.object.DefaultInstant(new DefaultPosition(start));
org.opengis.temporal.Instant endInstant = new org.geotools.temporal.object.DefaultInstant(new DefaultPosition(end));
org.opengis.temporal.Period period = new org.geotools.temporal.object.DefaultPeriod(startInstant, endInstant);

String property = Metacard.MODIFIED; // modified date of a metacard

org.opengis.filter.Filter filter = filterFactory.during(filterFactory.property(property), filterFactory.literal(period));

ddf.catalog.operation.QueryImpl query = new QueryImpl(filter);
Contextual Searches
Most contextual searches can be expressed using the PropertyIsLike filter. The special characters that have meaning in a PropertyIsLike filter are the wildcard, single wildcard, and escape characters (see Example Creating-Filters-1).
PropertyIsLike Special Characters
| Character | Description |
|---|---|
| Wildcard | Matches zero or more characters. |
| Single Wildcard | Matches exactly one character. |
| Escape | Escapes the meaning of the Wildcard, Single Wildcard, and the Escape character itself. |
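To make the interaction of the three special characters concrete, the sketch below translates a PropertyIsLike pattern into an equivalent java.util.regex expression. This is purely illustrative; DDF Sources translate patterns into their own native query syntax instead.

```java
public class LikePattern {
    // Translate a PropertyIsLike pattern with wildcard '*', single wildcard '?',
    // and escape '\' into a java.util.regex pattern string.
    public static String toRegex(String pattern) {
        StringBuilder regex = new StringBuilder();
        for (int i = 0; i < pattern.length(); i++) {
            char c = pattern.charAt(i);
            if (c == '\\' && i + 1 < pattern.length()) {
                // Escaped character: treat the next character as a literal.
                regex.append(java.util.regex.Pattern.quote(String.valueOf(pattern.charAt(++i))));
            } else if (c == '*') {
                regex.append(".*");   // zero or more characters
            } else if (c == '?') {
                regex.append(".");    // exactly one character
            } else {
                regex.append(java.util.regex.Pattern.quote(String.valueOf(c)));
            }
        }
        return regex.toString();
    }

    public static boolean matches(String pattern, String value) {
        return value.matches(toRegex(pattern));
    }
}
```

For example, the pattern miss* matches "mission", while the escaped pattern miss\* matches only the literal string "miss*".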
Characters and words, such as AND, &, and, OR, |, or, NOT, ~, not, {, and }, are treated as literals in a PropertyIsLike filter. In order to create equivalent logical queries, a developer must instead use the Logical Operator filters {AND, OR, NOT}. The Logical Operator filters can be combined together with PropertyIsLike filters to create a tree that represents the search phrase expression.
Example Creating-Filters-3
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl();
boolean isCaseSensitive = false;
String wildcardChar = "*"; // used to match zero or more characters
String singleChar = "?"; // used to match exactly one character
String escapeChar = "\\"; // used to escape the meaning of the wildCard, singleChar, and the escapeChar itself

Filter filter =
    filterFactory.and(
        filterFactory.like(filterFactory.property(Metacard.METADATA), "mission",
            wildcardChar, singleChar, escapeChar, isCaseSensitive),
        filterFactory.like(filterFactory.property(Metacard.METADATA), "planning",
            wildcardChar, singleChar, escapeChar, isCaseSensitive)
    );

ddf.catalog.operation.QueryImpl query = new QueryImpl(filter);
Tree View of Example Creating-Filters-3
Filters used in DDF can always be represented in a tree diagram.
XML View of Example Creating-Filters-3
Another way to view this type of Filter is through an XML model, which is shown below.
<Filter>
  <And>
    <PropertyIsLike wildCard="*" singleChar="?" escapeChar="\">
      <PropertyName>metadata</PropertyName>
      <Literal>mission</Literal>
    </PropertyIsLike>
    <PropertyIsLike wildCard="*" singleChar="?" escapeChar="\">
      <PropertyName>metadata</PropertyName>
      <Literal>planning</Literal>
    </PropertyIsLike>
  </And>
</Filter>
Using the Logical Operators and PropertyIsLike filters, a developer can create a whole language of search phrase expressions.
Fuzzy Operation
DDF supports only one custom function. The Filter specification does not include a fuzzy operator, so a Filter function was created to represent a fuzzy operation. The function class is called FuzzyFunction, and clients use it to notify Sources to perform a fuzzy search. The syntax expected by providers follows the FuzzyFunction usage shown in the example below.
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl();
String searchPhrase = "mission";
String wildcardChar = "*"; // used to match zero or more characters
String singleChar = "?"; // used to match exactly one character
String escapeChar = "\\"; // used to escape the meaning of the wildCard, singleChar, and the escapeChar itself
boolean isCaseSensitive = false;

Filter fuzzyFilter = filterFactory.like(
    new ddf.catalog.impl.filter.FuzzyFunction(
        Arrays.asList((Expression) (filterFactory.property(Metacard.ANY_TEXT))),
        filterFactory.literal("")),
    searchPhrase,
    wildcardChar,
    singleChar,
    escapeChar,
    isCaseSensitive);

QueryImpl query = new QueryImpl(fuzzyFilter);
Parsing Filters
According to the OGC Filter Specification (04-095: http://www.opengeospatial.org/standards/filter), a "(filter expression) representation can be…parsed and then transformed into whatever target language is required to retrieve or modify object instances stored in some persistent object store." Filters can be thought of as the WHERE clause for a SQL SELECT statement to "fetch data stored in a SQL-based relational database."
Sources can parse OGC Filters using the FilterAdapter and FilterDelegate. See Developing a Filter Delegate for more details on implementing a new FilterDelegate. This is the preferred way to handle OGC Filters in a consistent manner.
Alternately, org.opengis.filter.Filter implementations can be parsed using implementations of the interface org.opengis.filter.FilterVisitor. The FilterVisitor uses the Visitor pattern (http://www.oodesign.com/visitor-pattern.html). Essentially, FilterVisitor instances "visit" each part of the Filter tree, allowing developers to implement logic to handle the filter's operations. Geotools 8 includes implementations of the FilterVisitor interface. The DefaultFilterVisitor, as an example, provides only the logic to visit every node in the Filter tree; its methods are meant to be overridden with the correct business logic. The simplest approach when using FilterVisitor instances is to build the appropriate query syntax for a target language as each part of the Filter is visited. For instance, when given an incoming Filter object to be evaluated against an RDBMS, a CatalogProvider instance could use a FilterVisitor to interpret each filter operation on the Filter object and translate those operations into SQL. The FilterVisitor may be needed to support Filter functionality not currently handled by the FilterAdapter and FilterDelegate reference implementation.
Examples
Interpreting a Filter to Create SQL
If the FilterAdapter encountered or "visited" a PropertyIsLike filter with its property assigned as title and its literal expression assigned as mission, the FilterDelegate could create the proper SQL syntax similar to title LIKE mission.
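The division of labor can be sketched with a simplified stand-in delegate. The class and method shapes below are hypothetical; the real extension point is ddf.catalog.filter.FilterDelegate<T>, whose methods the FilterAdapter calls as it walks the filter tree.

```java
import java.util.List;

// Simplified stand-in for a delegate that translates filter nodes into SQL fragments.
// Method names mirror the idea, not the exact ddf.catalog.filter.FilterDelegate signatures.
public class SqlDelegate {

    // Invoked when the adapter reaches a PropertyIsLike node.
    public static String propertyIsLike(String propertyName, String pattern) {
        return propertyName + " LIKE '" + pattern + "'";
    }

    // Invoked when the adapter reaches an And node, after its children are translated.
    public static String and(List<String> operands) {
        return "(" + String.join(" AND ", operands) + ")";
    }
}
```

Here, propertyIsLike("title", "mission") yields title LIKE 'mission', matching the narrative above.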
Interpreting a Filter to Create XQuery
If the FilterAdapter encountered an OR filter, such as in Figure Parsing-Filters2, and the target language were XQuery, the FilterDelegate could yield an expression such as ft:query(//inventory:book/@subject,'math') union ft:query(//inventory:book/@subject,'science').
FilterAdapter/Delegate Process for Figure Parsing-Filters2
-
The FilterAdapter visits the OR filter first.
-
The OR filter visits its children in a loop.
-
The first child in the loop that is encountered is the LHS PropertyIsLike.
-
The FilterAdapter will call the FilterDelegate PropertyIsLike method with the LHS property and literal.
-
The LHS PropertyIsLike delegate method builds the XQuery syntax that makes sense for this particular underlying object store. In this case, the subject property is specific to this XML database, and the business logic maps the subject property to its index at //inventory:book/@subject. Note that ft:query in this instance is a custom XQuery module for this specific XML database that does full text searches.
-
The FilterAdapter then moves back to the OR filter, which visits its second child.
-
The FilterAdapter will call the FilterDelegate PropertyIsLike method with the RHS property and literal.
-
The RHS PropertyIsLike delegate method builds the XQuery syntax that makes sense for this particular underlying object store. In this case, the subject property is specific to this XML database, and the business logic maps the subject property to its index at //inventory:book/@subject. Note that ft:query in this instance is a custom XQuery module for this specific XML database that does full text searches.
-
The FilterAdapter then moves back to its OR filter, which is now done with its children.
-
It then collects the output of each child and sends the list of results to the FilterDelegate OR method.
-
The final result object is returned from the FilterAdapter adapt method.
FilterVisitor Process for Figure Parsing-Filters2
-
The FilterVisitor visits the OR filter first.
-
The OR filter visits its children in a loop.
-
The first child in the loop that is encountered is the LHS PropertyIsLike.
-
The LHS PropertyIsLike builds the XQuery syntax that makes sense for this particular underlying object store. In this case, the subject property is specific to this XML database, and the business logic maps the subject property to its index at //inventory:book/@subject. Note that ft:query in this instance is a custom XQuery module for this specific XML database that does full text searches.
-
The FilterVisitor then moves back to the OR filter, which visits its second child.
-
The RHS PropertyIsLike builds the XQuery syntax that makes sense for this particular underlying object store. In this case, the subject property is specific to this XML database, and the business logic maps the subject property to its index at //inventory:book/@subject. Note that ft:query in this instance is a custom XQuery module for this specific XML database that does full text searches.
-
The FilterVisitor then moves back to its OR filter, which is now done with its children. It then collects the output of each child and could potentially execute the following code to produce the above expression.
public Object visit(Or filter, Object data) {
    ...
    /* the equivalent statement for the OR filter in this domain (XQuery) */
    xQuery = childFilter1Output + " union " + childFilter2Output;
    ...
}
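The complete walk can be condensed into a self-contained sketch. The filter classes below are simplified stand-ins (the real types are org.opengis.filter interfaces), but the traversal mirrors the steps above.

```java
// Self-contained sketch of the visitor walk; stand-in classes, not the GeoTools API.
public class XQueryVisitorDemo {

    public interface Filter {
        String accept(Visitor v);
    }

    public static class PropertyIsLike implements Filter {
        final String property;
        final String literal;

        public PropertyIsLike(String property, String literal) {
            this.property = property;
            this.literal = literal;
        }

        public String accept(Visitor v) { return v.visit(this); }
    }

    public static class Or implements Filter {
        final Filter left;
        final Filter right;

        public Or(Filter left, Filter right) {
            this.left = left;
            this.right = right;
        }

        public String accept(Visitor v) { return v.visit(this); }
    }

    public static class Visitor {
        // Map the property to its XML index and emit a full-text query call.
        String visit(PropertyIsLike f) {
            return "ft:query(//inventory:book/@" + f.property + ",'" + f.literal + "')";
        }

        // The OR node visits both children, then joins their output with 'union'.
        String visit(Or f) {
            return f.left.accept(this) + " union " + f.right.accept(this);
        }
    }

    public static String translate(Filter f) {
        return f.accept(new Visitor());
    }
}
```

Translating an OR of two PropertyIsLike filters on subject produces the union expression shown earlier.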
Filter Profile
Role of the OGC Filter
Both Queries and Subscriptions extend the OGC GeoAPI Filter interface.
The Filter Builder and Adapter do not fully implement the OGC Filter Specification. The filter support profile contains suggested filter to metacard type mappings. For example, even though a Source could support a PropertyIsGreaterThan filter on XML_TYPE, it would not likely be useful.
Catalog Filter Profile
Metacard Attribute To Type Mapping
The filter profile maps filters to metacard types. The following table displays the common metacard attributes with their respective types for reference.
| Metacard Attribute | Metacard Type |
|---|---|
| ANY_DATE | DATE_TYPE |
| ANY_GEO | GEO_TYPE |
| ANY_TEXT | STRING_TYPE |
| CONTENT_TYPE | STRING_TYPE |
| CONTENT_TYPE_VERSION | STRING_TYPE |
| CREATED | DATE_TYPE |
| EFFECTIVE | DATE_TYPE |
| GEOGRAPHY | GEO_TYPE |
| ID | STRING_TYPE |
| METADATA | XML_TYPE |
| MODIFIED | DATE_TYPE |
| RESOURCE_SIZE | STRING_TYPE |
| RESOURCE_URI | STRING_TYPE |
| SOURCE_ID | STRING_TYPE |
| TARGET_NAMESPACE | STRING_TYPE |
| THUMBNAIL | BINARY_TYPE |
| TITLE | STRING_TYPE |
Comparison Operators
Comparison operators compare the value associated with a property name with a given Literal value. Endpoints and sources should use metacard types other than OBJECT_TYPE; OBJECT_TYPE is supported only for backwards compatibility with java.net.URI, and endpoints that send other objects will not be supported by standard sources. The following table maps the metacard types to supported comparison operators (column names are the PropertyIs* operator suffixes).
| PropertyIs… | Between | EqualTo | GreaterThan | GreaterThanOrEqualTo | LessThan | LessThanOrEqualTo | Like | NotEqualTo | Null |
|---|---|---|---|---|---|---|---|---|---|
| BINARY_TYPE | | | | | | | | | X |
| BOOLEAN_TYPE | | | | | | | | | X |
| DATE_TYPE | X | X | X | X | X | X | | X | X |
| DOUBLE_TYPE | X | X | X | X | X | X | | X | X |
| FLOAT_TYPE | X | X | X | X | X | X | | X | X |
| GEO_TYPE | | | | | | | | | X |
| INTEGER_TYPE | X | X | X | X | X | X | | X | X |
| LONG_TYPE | X | X | X | X | X | X | | X | X |
| OBJECT_TYPE | X | X | X | X | X | X | | X | X |
| SHORT_TYPE | X | X | X | X | X | X | | X | X |
| STRING_TYPE | X | X | X | X | X | X | X | X | X |
| XML_TYPE | | X | | | | | X | | X |
The following table describes each comparison operator.
| Operator | Description |
|---|---|
PropertyIsBetween |
Lower <= Property <= Upper |
PropertyIsEqualTo |
Property == Literal |
PropertyIsGreaterThan |
Property > Literal |
PropertyIsGreaterThanOrEqualTo |
Property >= Literal |
PropertyIsLessThan |
Property < Literal |
PropertyIsLessThanOrEqualTo |
Property <= Literal |
PropertyIsLike |
Property LIKE Literal; equivalent to the SQL "like" operator |
PropertyIsNotEqualTo |
Property != Literal |
PropertyIsNull |
Property == null |
Logical Operators
Logical operators apply Boolean logic to one or more child filters.
| | And | Not | Or |
|---|---|---|---|
| Supported Filters | X | X | X |
Temporal Operators
Temporal operators compare a date associated with a property name to a given Literal date or date range. The following table displays the supported temporal operators.
| | After | AnyInteracts | Before | Begins | BegunBy | During | EndedBy | Meets | MetBy | OverlappedBy | TContains |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DATE_TYPE | X | | X | | | X | | | | | |
The following table describes each temporal operator. Literal values can be either date instants or date periods.
| Operator | Description |
|---|---|
After |
Property > (Literal || Literal.end) |
Before |
Property < (Literal || Literal.start) |
During |
Literal.start < Property < Literal.end |
Spatial Operators
Spatial operators compare a geometry associated with a property name to a given Literal geometry. The following table displays the supported spatial operators.
| | BBox | Beyond | Contains | Crosses | Disjoint | Equals | DWithin | Intersects | Overlaps | Touches | Within |
|---|---|---|---|---|---|---|---|---|---|---|---|
| GEO_TYPE | | X | X | X | X | | X | X | | | X |
The following table describes each spatial operator. Geometries are usually represented as Well-Known Text (WKT).
| Operator | Description |
|---|---|
Beyond |
Property geometry is beyond the given distance of Literal geometry |
Contains |
Property geometry contains Literal geometry |
Crosses |
Property geometry crosses Literal geometry |
Disjoint |
Property geometry direct positions are not interior to Literal geometry |
DWithin |
Property geometry lies within the given distance of Literal geometry |
Intersects |
Property geometry intersects Literal geometry; the opposite of the Disjoint operator |
Overlaps |
Property geometry interior somewhere overlaps Literal geometry interior |
Touches |
Property geometry touches but does not overlap Literal geometry |
Within |
Property geometry is completely contained within Literal geometry |
Commons-DDF Utilities
The commons-ddf bundle, located in <DDF_HOME_SOURCE_DIRECTORY>/common/commons-ddf, provides utilities and functionality commonly used across other DDF components, such as the endpoints and providers.
Noteworthy Classes
FuzzyFunction
The ddf.catalog.impl.filter.FuzzyFunction class is used to indicate that a PropertyIsLike filter should interpret the search as a fuzzy query.
XPathHelper
ddf.util.XPathHelper provides convenience methods for executing XPath operations on XML. It also provides convenience methods for converting XML between a String and an org.w3c.dom.Document object.
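The kind of convenience XPathHelper wraps can be pictured with the equivalent JDK calls. The sketch below uses only the standard javax.xml APIs, not the XPathHelper implementation itself.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathSketch {

    // Parse an XML String into a Document and evaluate an XPath expression
    // against it: the two operations XPathHelper bundles behind convenience methods.
    public static String evaluate(String xml, String xpath) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
            return XPathFactory.newInstance().newXPath().evaluate(xpath, doc);
        } catch (Exception e) {
            throw new IllegalArgumentException("Could not evaluate XPath against XML", e);
        }
    }
}
```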
Working with Settings
DDF provides the ability to obtain DDF settings/properties. For a list of DDF settings, refer to the Catalog API and the Global Settings section in the Integrator's Guide. Implementations of DdfConfigurationWatcher receive property updates; for example, if the port number changes, the DDF_PORT property value will be propagated to the watcher(s) in the form of a map.
Property Values
To obtain the property values, complete the following procedure.
-
Import and implement the ddf.catalog.util.DdfConfigurationWatcher interface.
public class SettingsWatcher implements DdfConfigurationWatcher
-
Get properties map and search for the property.
public void ddfConfigurationUpdated( Map properties )
{
//Get property by name
Object value = properties.get( DdfConfigurationManager.DDF_HOME_DIR );
if ( value != null )
{
this.ddfHomeDir = value.toString();
logger.debug( "ddfHomeDir = " + this.ddfHomeDir );
}
}
-
Export the watcher class as a service in the OSGi Registry. The example below uses the Blueprint dependency injection framework to add this watcher to the OSGi Registry. The ddf.catalog.DdfConfigurationManager will search for ConfigurationWatcher(s) to send property updates.
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0">
<!-- create the bean -->
<bean id="SettingsWatcher" class="ddf.catalog.SettingsWatcher">
<cm:managed-properties
persistent-id="ddf.catalog.SettingsWatcher"
update-strategy="container-managed" />
</bean>
<!-- export the bean in the service registry as a DdfConfigurationWatcher -->
<service ref="SettingsWatcher" interface="ddf.catalog.util.DdfConfigurationWatcher">
</service>
</blueprint>
-
Import the DDF packages to the bundle's manifest for run-time (in addition to any other required packages).
Import-Package: ddf.catalog, ddf.catalog.util, ddf.catalog.*
-
Deploy the packaged service to DDF (refer to the Working with OSGi - Bundles section).
Extending Catalog Plugins
The Catalog Framework calls Catalog Plugins to process requests and responses as they enter and leave the Framework.
Existing Plugins
Pre-Ingest Plugin
Using
Pre-Ingest plugins are invoked before an ingest operation is sent to a Source. This is an opportunity to take any action on the ingest request, including but not limited to:
-
validation
-
logging
-
auditing
-
optimization
-
security filtering
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown. For any other Exceptions, the Catalog will "fail safe" and the Operation will be cancelled. If processing is to be explicitly stopped, a StopProcessingException will be thrown.
Invocation
Pre-Ingest plugins are invoked serially, prioritized by descending OSGi service ranking. That is, the plugin with the highest service ranking will be executed first.
The output of a Pre-Ingest plugin is sent to the next Pre-Ingest plugin, until all have executed and the ingest operation is sent to the requested Source.
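The serial invocation described above amounts to a loop in which each plugin's output becomes the next plugin's input. The sketch below uses stand-in types; the real contract is ddf.catalog.plugin.PreIngestPlugin, whose process methods operate on CreateRequest, UpdateRequest, and DeleteRequest objects.

```java
import java.util.List;

public class PluginChainDemo {

    // Stand-in for the pre-ingest plugin contract; the real interface's process
    // methods take request objects rather than Strings.
    public interface PreIngestPlugin {
        String process(String request);
    }

    // Framework-side loop: plugins are ordered by descending service ranking,
    // and each plugin's output feeds the next.
    public static String runChain(List<PreIngestPlugin> pluginsByDescendingRank, String request) {
        String current = request;
        for (PreIngestPlugin plugin : pluginsByDescendingRank) {
            current = plugin.process(current);
        }
        return current; // what is finally sent to the requested Source
    }
}
```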
Metacard Groomer
The Metacard Groomer Pre-Ingest plugin makes modifications to CreateRequest and UpdateRequest metacards.
This plugin makes the following modifications when metacards are in a CreateRequest:
-
Overwrites the Metacard.ID field with a generated, unique, 32 character hexadecimal value
-
Overwrites the Metacard.CREATED date with a current time stamp
-
Overwrites the Metacard.MODIFIED date with a current time stamp
The plugin also makes the following modifications when metacards are in an UpdateRequest:
-
If no value is provided for Metacard.ID in the new metacard, it will be set using the UpdateRequest ID if applicable.
-
If no value is provided, sets the Metacard.CREATED date to the Metacard.MODIFIED date so that the Metacard.CREATED date is not null. -
-
Overwrites the Metacard.MODIFIED date with a current time stamp
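The ID and timestamp defaults above can be sketched with plain JDK calls. The generation scheme shown is an assumption for illustration, though stripping the hyphens from a random UUID is a common way to obtain a 32-character hexadecimal value.

```java
import java.util.Date;
import java.util.UUID;

public class GroomerSketch {

    // One way to produce a generated, unique, 32-character hexadecimal identifier
    // of the kind the plugin writes into Metacard.ID (the plugin's exact scheme may differ).
    public static String generateId() {
        return UUID.randomUUID().toString().replace("-", "");
    }

    // On create, both Metacard.CREATED and Metacard.MODIFIED receive the current time.
    public static Date currentTimestamp() {
        return new Date();
    }
}
```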
Installing and Uninstalling
This plugin can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
No configuration is necessary for this plugin.
Using
Use this pre-ingest plugin as a convenience to apply basic rules for your metacards.
Known Issues
None
Post-Ingest Plugin
Using
Post-ingest plugins are invoked after data has been created, updated, or deleted in a Catalog Provider.
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown.
Invocation
Because the event has already occurred and changes from one post-ingest plugin cannot affect others, all Post-Ingest plugins are invoked in parallel and no priority is enforced.
Pre-Query Plugin
Using
Pre-query plugins are invoked before a query operation is sent to any of the Sources. This is an opportunity to take any action on the query, including but not limited to:
-
validation
-
logging
-
auditing
-
optimization
-
security filtering
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown. For any other Exceptions, the Catalog will "fail safe" and the Operation will be cancelled. If processing is to be explicitly stopped, a StopProcessingException will be thrown.
Invocation
Pre-query plugins are invoked serially, prioritized by descending OSGi service ranking. That is, the plugin with the highest service ranking will be executed first. The output of a pre-query plugin is sent to the next pre-query plugin, until all have executed and the query operation is sent to the requested Source.
Post-Query Plugin
Using
Post-query plugins are invoked after a query has been executed successfully, but before the response is returned to the endpoint. This is an opportunity to take any action on the query response, including but not limited to:
-
logging
-
auditing
-
security filtering/redaction
-
deduplication
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown. For any other Exceptions, the Catalog will "fail safe" and the Operation will be cancelled. If processing is to be explicitly stopped, a StopProcessingException will be thrown.
Invocation
Post-query plugins are invoked serially, prioritized by descending OSGi service ranking. That is, the plugin with the highest service ranking will be executed first. The output of the first plugin is sent to the next plugin, until all have executed and the response is returned to the requesting endpoint.
Metacard Resource Size Plugin
This post-query plugin updates the resource size attribute of each metacard in the query results if there is a cached file for the product and it has a size greater than zero; otherwise, the resource size is unmodified and the original result is returned.
Installing and Uninstalling
This feature can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
No configuration is necessary for this plugin.
Using
Use this post-query plugin as a convenience to return query results with accurate resource sizes for cached products.
Known Issues
None
Other Types of Plugins
Pre-Get Resource Plugin
Using
Pre-get resource plugins are invoked before a request to retrieve a resource is sent to a Source. This is an opportunity to take any action on the request, including but not limited to:
-
validation
-
logging
-
auditing
-
optimization
-
security filtering
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown. For any other Exceptions, the Catalog will "fail safe" and the Operation will be cancelled. If processing is to be explicitly stopped, a StopProcessingException will be thrown.
Invocation
Pre-get resource plugins are invoked serially, prioritized by descending OSGi service ranking. That is, the plugin with the highest service ranking will be executed first.
The output of the first plugin is sent to the next plugin, until all have executed and the request is sent to the targeted Source.
Post-Get Resource Plugin
Using
Post-get resource plugins are invoked after a resource has been retrieved, but before it is returned to the endpoint. This is an opportunity to take any action on the response, including but not limited to:
-
logging
-
auditing
-
security filtering/redaction
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown. For any other Exceptions, the Catalog will "fail safe" and the Operation will be cancelled. If processing is to be explicitly stopped, a StopProcessingException will be thrown.
Invocation
Post-get resource plugins are invoked serially, prioritized by descending OSGi service ranking. That is, the plugin with the highest service ranking will be executed first.
The output of the first plugin is sent to the next plugin, until all have executed and the response is returned to the requesting endpoint.
Pre-Subscription Plugin
Using
Pre-subscription plugins are invoked before a Subscription is activated by an Event Processor. This is an opportunity to take any action on the Subscription, including but not limited to:
-
validation
-
logging
-
auditing
-
optimization
-
security filtering
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown. For any other Exceptions, the Catalog will "fail safe" and the Operation will be cancelled. If processing is to be explicitly stopped, a StopProcessingException will be thrown.
Invocation
Pre-subscription plugins are invoked serially, prioritized by descending OSGi service ranking. That is, the plugin with the highest service ranking will be executed first.
The output of a pre-subscription plugin is sent to the next pre-subscription plugin, until all have executed and the create Subscription operation is sent to the Event Processor.
Examples
DDF includes a pre-subscription plugin example in the SDK that illustrates how to modify a subscription’s filter. This example is located in the DDF trunk at sdk/sample-plugins/ddf/sdk/plugin/presubscription.
Pre-Delivery Plugin
Using
Pre-delivery plugins are invoked before a Delivery Method is invoked on a Subscription. This is an opportunity to take any action before notification, including but not limited to:
-
logging
-
auditing
-
security filtering/redaction
Failure Behavior
In the event that this Catalog Plugin cannot operate but does not wish to fail the transaction, a PluginExecutionException will be thrown. For any other Exceptions, the Catalog will "fail safe" and the Operation will be cancelled. If processing is to be explicitly stopped, a StopProcessingException will be thrown.
Invocation
Pre-delivery plugins are invoked serially, prioritized by descending OSGi service ranking. That is, the plugin with the highest service ranking will be executed first.
The output of a pre-delivery plugin is sent to the next pre-delivery plugin, until all have executed and the Delivery Method is invoked on the associated Subscription.
Developing a Catalog Plugin
Plugins extend the functionality of the Catalog Framework by performing actions at specified times during a transaction. Plugins can be Pre-Ingest, Post-Ingest, Pre-Query, Post-Query, Pre-Subscription, Pre-Delivery, Pre-Resource, or Post-Resource. By implementing these interfaces, actions can be performed at the desired time. Refer to Catalog Framework for more information on how these plugins fit in the ingest and query flows.
Create New Plugins
Implement Plugin Interface
The following types of plugins can be created:
| Plugin Type | Plugin Interface | Description | Example |
|---|---|---|---|
Pre-Ingest |
ddf.catalog.plugin.PreIngestPlugin |
Runs before the Create/Update/Delete method is sent to the CatalogProvider |
Metadata validation services |
Post-Ingest |
ddf.catalog.plugin.PostIngestPlugin |
Runs after the Create/Update/Delete method is sent to the CatalogProvider |
EventProcessor for processing and publishing event notifications to subscribers |
Pre-Query |
ddf.catalog.plugin.PreQueryPlugin |
Runs prior to the Query/Read method being sent to the Source |
An example is not included with DDF |
Post-Query |
ddf.catalog.plugin.PostQueryPlugin |
Runs after results have been retrieved from the query but before they are posted to the Endpoint |
An example is not included with DDF |
Pre-Subscription |
ddf.catalog.plugin.PreSubscription |
Runs prior to a Subscription being created or updated |
Modify a query prior to creating a subscription |
Pre-Delivery |
ddf.catalog.plugin.PreDeliveryPlugin |
Runs prior to the delivery of a Metacard when an event is posted |
Inspect a metacard prior to delivering it to the Event Consumer |
Pre-Resource |
ddf.catalog.plugin.PreResourcePlugin |
Runs prior to a Resource being retrieved |
An example is not included with DDF |
Post-Resource |
ddf.catalog.plugin.PostResourcePlugin |
Runs after a Resource is retrieved, but before it is sent to the Endpoint |
Verification of a resource prior to returning to a client |
Implement Plugins
The procedure for implementing any of the plugins follows a similar format:
-
Create a new class that implements the specified plugin interface.
-
Implement the required methods.
-
Create OSGi descriptor file to communicate with the OSGi registry (described in the OSGi Services section).
-
Import DDF packages.
-
Register plugin class as service to OSGi registry.
-
Deploy to DDF. (Refer to the Working with OSGi - Bundles section.)
|
Refer to the Javadoc for more information on all Requests and Responses in the ddf.catalog.operation and ddf.catalog.event packages.
|
Pre-Ingest
-
Create a Java class that implements PreIngestPlugin.
public class SamplePreIngestPlugin implements ddf.catalog.plugin.PreIngestPlugin
-
Implement the required methods.
public CreateRequest process(CreateRequest input) throws PluginExecutionException;
public UpdateRequest process(UpdateRequest input) throws PluginExecutionException;
public DeleteRequest process(DeleteRequest input) throws PluginExecutionException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="[[SamplePreIngestPlugin]]" interface="ddf.catalog.plugin.PreIngestPlugin" />
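For reference, the service fragment above typically lives inside a complete Blueprint descriptor (e.g., OSGI-INF/blueprint/blueprint.xml in the bundle). A minimal sketch, assuming a hypothetical org.example.SamplePreIngestPlugin implementation class:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- Instantiate the plugin; the class name here is a placeholder -->
    <bean id="samplePreIngestPlugin" class="org.example.SamplePreIngestPlugin" />
    <!-- Register it so the Catalog Framework discovers and invokes it -->
    <service ref="samplePreIngestPlugin" interface="ddf.catalog.plugin.PreIngestPlugin" />
</blueprint>
```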
Post-Ingest
-
Create a Java class that implements PostIngestPlugin.
public class SamplePostIngestPlugin implements ddf.catalog.plugin.PostIngestPlugin
-
Implement the required methods.
public CreateResponse process(CreateResponse input) throws PluginExecutionException;
public UpdateResponse process(UpdateResponse input) throws PluginExecutionException;
public DeleteResponse process(DeleteResponse input) throws PluginExecutionException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="[[SamplePostIngestPlugin]]" interface="ddf.catalog.plugin.PostIngestPlugin" />
Pre-Query
-
Create a Java class that implements PreQueryPlugin.
public class SamplePreQueryPlugin implements ddf.catalog.plugin.PreQueryPlugin
-
Implement the required method.
public QueryRequest process(QueryRequest input) throws PluginExecutionException, StopProcessingException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="[[SamplePreQueryPlugin]]" interface="ddf.catalog.plugin.PreQueryPlugin" />
Post-Query
-
Create a Java class that implements PostQueryPlugin.
public class SamplePostQueryPlugin implements ddf.catalog.plugin.PostQueryPlugin
-
Implement the required method.
public QueryResponse process(QueryResponse input) throws PluginExecutionException, StopProcessingException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="[[SamplePostQueryPlugin]]" interface="ddf.catalog.plugin.PostQueryPlugin" />
Pre-Delivery
-
Create a Java class that implements PreDeliveryPlugin.
public class SamplePreDeliveryPlugin implements ddf.catalog.plugin.PreDeliveryPlugin
-
Implement the required methods.
public Metacard processCreate(Metacard metacard) throws PluginExecutionException, StopProcessingException;
public Update processUpdateMiss(Update update) throws PluginExecutionException, StopProcessingException;
public Update processUpdateHit(Update update) throws PluginExecutionException, StopProcessingException;
public Metacard processDelete(Metacard metacard) throws PluginExecutionException, StopProcessingException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation,ddf.catalog.event
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="[[SamplePreDeliveryPlugin]]" interface="ddf.catalog.plugin.PreDeliveryPlugin" />
Pre-Subscription
-
Create a Java class that implements PreSubscriptionPlugin.
public class SamplePreSubscriptionPlugin implements ddf.catalog.plugin.PreSubscriptionPlugin
-
Implement the required method.
public Subscription process(Subscription input) throws PluginExecutionException, StopProcessingException;
Pre-Resource
-
Create a Java class that implements PreResourcePlugin.
public class SamplePreResourcePlugin implements ddf.catalog.plugin.PreResourcePlugin
-
Implement the required method.
public ResourceRequest process(ResourceRequest input) throws PluginExecutionException, StopProcessingException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="[[SamplePreResourcePlugin]]" interface="ddf.catalog.plugin.PreResourcePlugin" />
Post-Resource
-
Create a Java class that implements PostResourcePlugin.
public class SamplePostResourcePlugin implements ddf.catalog.plugin.PostResourcePlugin
-
Implement the required method.
public ResourceResponse process(ResourceResponse input) throws PluginExecutionException, StopProcessingException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="[[SamplePostResourcePlugin]]" interface="ddf.catalog.plugin.PostResourcePlugin" />
Extending Operations
The Catalog provides the capability to query, create, update, and delete metacards; retrieve resources; and retrieve information about the sources in the enterprise.
Each of these operations follows a request/response paradigm. The request is the input to the operation and contains all of the input parameters needed by the Catalog Framework’s operation to communicate with the Sources. The response is the output from the execution of the operation that is returned to the client, which contains all of the data returned by the sources. For each operation there is an associated request/response pair, e.g., the QueryRequest and QueryResponse pair for the Catalog Framework’s query operation.
All of the request and response objects are extensible in that they can contain additional key/value properties on each request/response. This allows additional capability to be added without changing the Catalog API, helping to maintain backwards compatibility. Refer to the Developer’s Guide for details about using this extensibility.
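This extensibility can be illustrated with a minimal stand-in. The real ddf.catalog.operation request/response interfaces expose the same idea through their own accessors, so the class and method names below (OperationRequest, setProperty, getProperty, hasProperty) are assumptions for illustration only:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for one side of a request/response pair: arbitrary
// key/value properties ride along without any change to the operation API,
// which is what preserves backwards compatibility.
class OperationRequest {
    private final Map<String, Serializable> properties = new HashMap<>();

    void setProperty(String key, Serializable value) { properties.put(key, value); }
    Serializable getProperty(String key) { return properties.get(key); }
    boolean hasProperty(String key) { return properties.containsKey(key); }
}
```

A plugin or endpoint that does not recognize a given property key can simply ignore it, so new capabilities degrade gracefully for older components.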
Extending Data and Metadata Basics
The catalog stores and translates Metadata which can be transformed into many data formats, shared, and queried. The primary form of this metadata is the metacard. A Metacard is a container for metadata. CatalogProviders accept Metacards as input for ingest, and Sources search for metadata and return matching Results that include Metacards.
Metacard
A single instance of metadata in the Catalog (an instance of a metacard type) which generally contains metadata providing a title for the product and describing a product’s geo-location, created and modified dates, owner or producer, security classification, etc.
Metacard Type
A metacard type indicates the attributes available for a particular metacard. It is a model used to define the attributes of a metacard, much like a schema.
Default Metacard Type and Attributes
Most metacards within the system are created using the default metacard type. The default metacard type of the system can be programmatically retrieved by calling ddf.catalog.data.BasicTypes.BASIC_METACARD. The name of the default MetacardType can be retrieved from ddf.catalog.data.MetacardType.DEFAULT_METACARD_TYPE_NAME.
The default metacard type has the following required attributes. Though the following attributes are required on all metacard types, setting their values is optional except for ID.
Required Attributes
| ddf.catalog.data.Metacard Constant | Attribute Name | Attribute Format | Description |
|---|---|---|---|
CONTENT_TYPE |
metadata-content-type |
STRING |
Attribute name for accessing the metadata content type of a Metacard. |
CONTENT_TYPE_VERSION |
metadata-content-type-version |
STRING |
Attribute name for accessing the version of the metadata content type of a Metacard. |
CREATED |
created |
DATE |
Attribute name for accessing the date/time this Metacard was created. |
EFFECTIVE |
effective |
DATE |
Attribute name for accessing the date/time of the product represented by the Metacard. |
EXPIRATION |
expiration |
DATE |
Attribute name for accessing the date/time the Metacard is no longer valid and could be removed. |
GEOGRAPHY |
location |
GEOMETRY |
Attribute name for accessing the location for this Metacard. |
ID |
id |
STRING |
Attribute name for accessing the ID of the Metacard. |
METADATA |
metadata |
XML |
Attribute name for accessing the XML metadata for this Metacard. |
MODIFIED |
modified |
DATE |
Attribute name for accessing the date/time this Metacard was last modified. |
RESOURCE_SIZE |
resource-size |
STRING |
Attribute name for accessing the size in bytes of the product this Metacard represents. |
RESOURCE_URI |
resource-uri |
STRING |
Attribute name for accessing the URI reference to the product this Metacard represents. |
TARGET_NAMESPACE |
metadata-target-namespace |
STRING |
Attribute name for accessing the target namespace of the metadata content type of a Metacard. |
THUMBNAIL |
thumbnail |
BINARY |
Attribute name for accessing the thumbnail image of the product this Metacard represents. The thumbnail must be of MIME Type |
TITLE |
title |
STRING |
Attribute name for accessing the title of the Metacard. |
|
It is highly recommended when referencing a default attribute name to use the ddf.catalog.data.Metacard constants. |
|
Every Source should at the very least return an ID attribute according to Catalog API. Other fields might or might not be applicable, but a unique ID must be returned by a Source. |
Extensible Metacards
Metacard extensibility is achieved by creating a new MetacardType that supports attributes in addition to the required attributes listed above.
Required attributes must be the base of all extensible metacard types.
|
Not all Catalog Providers support extensible metacards. Nevertheless, each Catalog Provider should at least have support for the default MetacardType; i.e., it should be able to store and query on the attributes and attribute formats specified by the default metacard type. Consult the documentation of the Catalog Provider in use for more information on its support of extensible metacards. |
Metacard Extensibility
Often, the BASIC_METACARD MetacardType does not provide all the functionality or attributes necessary for a specific task. For performance or convenience purposes, it may be necessary to create custom attributes even if others will not be aware of those attributes. One example could be if a user wanted to optimize a search for a date field that did not fit the definition of CREATED, MODIFIED, EXPIRATION, or EFFECTIVE. The user could create an additional java.util.Date attribute in order to query the attribute separately.
Metacard objects are extensible because they allow clients to store and retrieve standard and custom key/value Attributes from the Metacard. All Metacards must return a MetacardType object that includes an AttributeDescriptor for each Attribute, indicating its key and value type. AttributeType support is limited to those types defined by the Catalog.
New MetacardType implementations can be made by implementing the MetacardType interface.
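A minimal sketch of such an implementation, extending the required base attributes with the extra java.util.Date field from the example above. The AttributeDescriptor record, the method set, and the "acquisition-date" attribute name are simplified assumptions; the real ddf.catalog.data interfaces carry more detail (bindings, multivalued flags, etc.):

```java
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

// Simplified stand-ins for the ddf.catalog.data types (assumptions).
record AttributeDescriptor(String name, String format) {}

interface MetacardType {
    String getName();
    Set<AttributeDescriptor> getAttributeDescriptors();
}

// An extensible metacard type: the required basic attributes form the base,
// plus one custom DATE attribute that can be queried separately.
class TemporalMetacardType implements MetacardType {
    private final Set<AttributeDescriptor> descriptors = new LinkedHashSet<>();

    TemporalMetacardType(Set<AttributeDescriptor> basicDescriptors) {
        descriptors.addAll(basicDescriptors);                                  // required base
        descriptors.add(new AttributeDescriptor("acquisition-date", "DATE")); // custom field
    }

    public String getName() { return "temporal.metacard"; }
    public Set<AttributeDescriptor> getAttributeDescriptors() {
        return Collections.unmodifiableSet(descriptors);
    }
}
```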
Metacard Type Registry
|
The MetacardTypeRegistry is experimental. While this component has been tested and is functional, it may change as more information is gathered about what is needed and as it is used in more scenarios. |
The MetacardTypeRegistry allows DDF components, primarily CatalogProviders and Sources, to make available the MetacardTypes that they support. It maintains a list of all supported MetacardTypes in the CatalogFramework, so that other components such as Endpoints, Plugins, and Transformers can make use of those MetacardTypes. The MetacardType is essential for a component in the CatalogFramework to understand how it should interpret a metacard by knowing what attributes are available in that metacard.
For example, an endpoint receiving incoming metadata can perform a lookup in the MetacardTypeRegistry to find a corresponding MetacardType. The discovered MetacardType will then be used to help the endpoint populate a metacard based on the specified attributes in the MetacardType. By doing this, all the incoming metadata elements can then be available for processing, cataloging, and searching by the rest of the CatalogFramework.
MetacardTypes should be registered with the MetacardTypeRegistry. The MetacardTypeRegistry makes those MetacardTypes available to other DDF CatalogFramework components. Other components that need to know how to interpret metadata or metacards should look up the appropriate MetacardType from the registry. By having these MetacardTypes available to the CatalogFramework, these components can be aware of the custom attributes.
The MetacardTypeRegistry is accessible as an OSGi service. The following blueprint snippet shows how to inject that service into another component:
<bean id="sampleComponent" class="ddf.catalog.SampleComponent">
<argument ref="metacardTypeRegistry" />
</bean>
<!-- Access MetacardTypeRegistry -->
<reference id="metacardTypeRegistry" interface="ddf.catalog.data.MetacardTypeRegistry"/>
The reference to this service can then be used to register new MetacardTypes or to lookup existing ones.
Typically, new MetacardTypes will be registered by CatalogProviders or Sources indicating they know how to persist, index, and query attributes from that type. Typically, Endpoints or InputTransformers will use the lookup functionality to access a MetacardType based on a parameter in the incoming metadata. Once the appropriate MetacardType is discovered and obtained from the registry, the component will know how to translate incoming raw metadata into a DDF Metacard.
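The register/lookup behavior described above can be sketched with a simple map-backed registry. This is a hypothetical illustration only; the real ddf.catalog.data.MetacardTypeRegistry API may differ in names and signatures:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for a registered type; only the lookup key matters here.
interface NamedMetacardType {
    String getName();
}

class MetacardTypeRegistrySketch {
    private final Map<String, NamedMetacardType> types = new ConcurrentHashMap<>();

    // Typically called by CatalogProviders/Sources advertising supported types.
    void register(NamedMetacardType type) {
        types.put(type.getName(), type);
    }

    // Typically called by Endpoints/InputTransformers while parsing incoming metadata.
    Optional<NamedMetacardType> lookup(String name) {
        return Optional.ofNullable(types.get(name));
    }
}
```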
Attribute
A single field of a metacard, an instance of an attribute type. Attributes are typically indexed for searching by a Source or Catalog Provider.
Attribute Type
An attribute type indicates the attribute format of the value stored as an attribute. It is a model for an attribute.
Attribute Format
An enumeration of attribute formats are available in the catalog. Only these attribute formats may be used.
| AttributeFormat | Description |
|---|---|
BINARY |
Attributes of this attribute format must have a value that is a Java byte[], and AttributeType.getBinding() should return the Class object for byte[]. |
BOOLEAN |
Attributes of this attribute format must have a value that is a Java boolean. |
DATE |
Attributes of this attribute format must have a value that is a Java date. |
DOUBLE |
Attributes of this attribute format must have a value that is a Java double. |
FLOAT |
Attributes of this attribute format must have a value that is a Java float. |
GEOMETRY |
Attributes of this attribute format must have a value that is a WKT-formatted Java string. |
INTEGER |
Attributes of this attribute format must have a value that is a Java integer. |
LONG |
Attributes of this attribute format must have a value that is a Java long. |
OBJECT |
Attributes of this attribute format must have a value that implements the serializable interface. |
SHORT |
Attributes of this attribute format must have a value that is a Java short. |
STRING |
Attributes of this attribute format must have a value that is a Java string and treated as plain text. |
XML |
Attributes of this attribute format must have a value that is an XML-formatted Java string. |
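As a rough summary of how these formats map to Java value types, the sketch below collects the bindings named in the table. The authoritative binding for any attribute comes from AttributeType.getBinding(), so this map is an assumption for illustration only:

```java
import java.io.Serializable;
import java.util.Date;
import java.util.Map;

class AttributeBindings {
    // Assumed Java bindings per AttributeFormat, following the table above.
    // GEOMETRY and XML are both carried as formatted Java Strings.
    static final Map<String, Class<?>> BINDINGS = Map.ofEntries(
        Map.entry("BINARY", byte[].class),
        Map.entry("BOOLEAN", Boolean.class),
        Map.entry("DATE", Date.class),
        Map.entry("DOUBLE", Double.class),
        Map.entry("FLOAT", Float.class),
        Map.entry("GEOMETRY", String.class),      // WKT-formatted string
        Map.entry("INTEGER", Integer.class),
        Map.entry("LONG", Long.class),
        Map.entry("OBJECT", Serializable.class),  // any serializable object
        Map.entry("SHORT", Short.class),
        Map.entry("STRING", String.class),
        Map.entry("XML", String.class)            // XML-formatted string
    );
}
```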
Result
A single "hit" included in a query response.
A result object consists of the following:
-
a metacard
-
a relevance score if included
-
distance in meters if included
Creating Metacards
The quickest way to create a Metacard is to extend or construct the MetacardImpl object. MetacardImpl is the most commonly used and extended Metacard implementation in the system because it provides a convenient way for developers to retrieve and set Attributes without having to create a new MetacardType (see below). MetacardImpl uses BASIC_METACARD as its MetacardType.
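The attribute-map idea behind MetacardImpl can be sketched as follows. This SimpleMetacard class is hypothetical; the real MetacardImpl ships with DDF and provides many more typed convenience setters backed by BASIC_METACARD:

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only (not the DDF class): attributes live in a map
// keyed by the default attribute names, with typed convenience accessors
// layered on top in the spirit of MetacardImpl.
class SimpleMetacard {
    private final Map<String, Object> attributes = new HashMap<>();

    void setAttribute(String name, Object value) { attributes.put(name, value); }
    Object getAttribute(String name) { return attributes.get(name); }

    // Convenience accessors for two of the default attributes
    void setTitle(String title) { setAttribute("title", title); }
    String getTitle() { return (String) getAttribute("title"); }
    void setCreatedDate(Date created) { setAttribute("created", created); }
}
```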
Limitations
A given developer does not have all the information necessary to programmatically interact with any arbitrary Source. Developers hoping to query custom fields from extensible Metacards of other Sources cannot easily accomplish that task with the current API. A developer cannot question a random Source for all its queryable fields. A developer only knows about the MetacardTypes which that individual developer has used or created previously.
The only exception to this limitation is the Metacard.ID field, which is required in every Metacard that is stored in a Source. A developer can always request Metacards from a Source for which that developer has the Metacard.ID value. The developer could also perform a wildcard search on the Metacard.ID field if the Source allows.
Processing Metacards
As Metacard objects are created, updated, and read throughout the Catalog, care should be taken by all Catalog Components to interrogate the MetacardType to ensure that additional Attributes are processed accordingly.
Basic Types
The Catalog includes definitions of several Basic Types, all found in the ddf.catalog.data.BasicTypes class.
| Name | Type | Description |
|---|---|---|
BASIC_METACARD |
MetacardType |
representing all required Metacard Attributes |
BINARY_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.BINARY. |
BOOLEAN_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.BOOLEAN. |
DATE_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.DATE . |
DOUBLE_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.DOUBLE. |
FLOAT_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.FLOAT. |
GEO_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.GEOMETRY. |
INTEGER_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.INTEGER. |
LONG_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.LONG . |
OBJECT_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.OBJECT. |
SHORT_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.SHORT. |
STRING_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.STRING. |
XML_TYPE |
AttributeType |
A Constant for an AttributeType with AttributeType.AttributeFormat.XML. |
Extending Catalog Framework
This section describes the core components of the Catalog app and Catalog Framework. The Catalog Framework wires all Catalog components together.
It is responsible for routing Catalog requests and responses to the appropriate target.
Endpoints send Catalog requests to the Catalog Framework. The Catalog Framework then invokes Catalog Plugins, Transformers, and Resource Components as needed before sending requests to the intended destination, such as one or more Sources.
The Catalog Framework functions as the routing mechanisms between all catalog components. It decouples clients from service implementations and provides integration points for Catalog Plugins and convenience methods for Endpoint developers.
Included Catalog Frameworks
Catalog API
The Catalog API is an OSGi bundle (catalog-core-api) that contains the Java interfaces for the Catalog components and implementation classes for the Catalog Framework, Operations, and Data components.
Standard Catalog Framework
The Standard Catalog Framework provides the reference implementation of a Catalog Framework that implements all requirements of the DDF Catalog API. CatalogFrameworkImpl is the implementation of the DDF Standard Catalog Framework.
Installing and Uninstalling
The Standard Catalog Framework is bundled as the catalog-core-standardframework feature and can be installed and uninstalled using the normal processes described in Configuration.
When this feature is installed, the Fanout Catalog Framework feature catalog-core-fanoutframework should be uninstalled, as the two catalog frameworks cannot be installed simultaneously.
Configuring
Configurable Properties
Catalog Standard Framework
| Property | Type | Description | Default Value | Required |
|---|---|---|---|---|
fanoutEnabled |
Boolean |
When enabled the Framework acts as a proxy, federating requests to all available sources. All requests are executed as federated queries and resource retrievals, allowing the framework to be the sole component exposing the functionality of all of its Federated Sources. |
false |
yes |
productCacheDirectory |
String |
Directory where retrieved products will be cached for faster future retrieval. If a directory path is specified with directories that do not exist, the Catalog Framework will attempt to create those directories. Out of the box (without configuration), the product cache directory is located under the DDF installation directory. It is recommended to enter an absolute directory path. |
|
no |
cacheEnabled |
Boolean |
Check to enable caching of retrieved products to provide faster retrieval for subsequent requests for the same product. |
false |
no |
delayBetweenRetryAttempts |
Integer |
The time to wait (in seconds) between each attempt to retry retrieving a product from the Source. |
10 |
no |
maxRetryAttempts |
Integer |
The maximum number of attempts to try and retrieve a product from the Source. |
3 |
no |
cachingMonitorPeriod |
Integer |
The number of seconds without any product data being read before the network connection to the Source hosting the product is considered down. |
5 |
no |
cacheWhenCanceled |
Boolean |
Check to enable caching of retrieved products even if client cancels the download. |
false |
no |
| Managed Service PID | ddf.catalog.CatalogFrameworkImpl |
|---|---|
Managed Service Factory PID |
N/A |
Using
The Standard Catalog Framework is the core class of DDF. It provides the methods for query, create, update, delete, and resource retrieval (QCRUD) operations on the Sources. By contrast, the Fanout Catalog Framework only allows for query and resource retrieval operations, no catalog modifications, and all queries are enterprise-wide.
Use this framework if:
-
access to a catalog provider to create, update, and delete catalog entries is required
-
queries to specific sites are required
-
queries to only the local provider are required
It is possible to have only remote Sources configured, with no local CatalogProvider, and still execute queries to specific remote sources by specifying the site name(s) in the query request.
The Standard Catalog Framework also maintains a list of ResourceReaders for resource retrieval operations. A resource reader is matched to the scheme (i.e., protocol, such as file://) in the URI of the resource specified in the request to be retrieved.
Site information about the catalog provider and/or any federated source(s) can be retrieved using the Standard Catalog Framework. Site information includes the source’s name, version, availability, and the list of unique content types currently stored in the source (e.g., NITF). If no local catalog provider is configured, the site information returned includes site info for the catalog framework with no content types included.
Implementation Details
Exported Services
| Registered Interface | Service Property | Value |
|---|---|---|
|
shortname |
sorted |
|
event.topics |
ddf/catalog/event/CREATED, ddf/catalog/event/UPDATED, ddf/catalog/event/DELETED |
|
||
|
|
|
|
||
|
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
ddf.catalog.source.ConnectedSource |
optional |
true |
|
optional |
true |
|
|
false |
|
|
false |
Known Issues
None
Catalog Fanout Framework
The Fanout Catalog Framework (fanout-catalogframework bundle) provides an implementation of the Catalog Framework that acts as a proxy, federating requests to all available sources. All requests are executed as federated queries and resource retrievals, allowing the fanout site to be the sole site exposing the functionality of all of its Federated Sources. FanoutCatalogFramework is the implementation class of the Fanout Catalog Framework.
The Fanout Catalog Framework provides the capability to configure DDF to be a fanout proxy to other federated sources within the enterprise. The Fanout Catalog Framework has no catalog provider configured for it, so it does not allow catalog modifications to take place. Therefore, create, update, and delete operations are not supported.
In addition, the Fanout Catalog Framework provides the following benefits:
-
Backwards compatibility with existing older versions of DDF (e.g., federating with older nodes)
-
A single node being exposed from an enterprise, thus hiding the enterprise from an external client
-
Ensures all queries and resource retrievals are federated
Installing and Uninstalling
The Fanout Catalog Framework is bundled as the catalog-core-fanoutframework feature and can be installed and uninstalled using the normal processes described in Configuration.
|
When this feature is installed, the Standard Catalog Framework feature catalog-core-standardframework should be uninstalled, as the two catalog frameworks cannot be installed simultaneously. |
Configuring
The Fanout Catalog Framework can be configured using the normal processes described in Configuring DDF.
The configurable properties for the Fanout Catalog Framework are accessed from the Catalog Fanout Framework configuration in the Web Console.
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
Default Timeout (in milliseconds) |
defaultTimeout |
Integer |
The maximum amount of time to wait for a response from the Sources. |
60000 |
yes |
Product Cache Directory |
productCacheDirectory |
String |
Directory where retrieved products will be cached for faster future retrieval. If a directory path is specified with directories that do not exist, the Catalog Framework will attempt to create those directories. Out of the box (without configuration), the product cache directory is located under the DDF installation directory. It is recommended to enter an absolute directory path. |
(empty) |
no |
Enable Product Caching |
cacheEnabled |
Boolean |
Check to enable caching of retrieved products to provide faster retrieval for subsequent requests for the same product. |
false |
no |
Delay (in seconds) between product retrieval retry attempts |
delayBetweenRetryAttempts |
Integer |
The time to wait (in seconds) between attempts to retry retrieving a product. |
10 |
no |
Max product retrieval retry attempts |
maxRetryAttempts |
Integer |
The maximum number of attempts to retry retrieving a product. |
3 |
no |
Caching Monitor Period |
cachingMonitorPeriod |
Integer |
How many seconds to wait and not receive product data before retrying to retrieve a product. |
5 |
no |
Always Cache Product |
cacheWhenCanceled |
Boolean |
Check to enable caching of retrieved products, even if client cancels the download. |
false |
no |
| Managed Service PID | ddf.catalog.impl.service.fanout.FanoutCatalogFramework |
|---|---|
Managed Service Factory PID |
N/A |
Using
The Fanout Catalog Framework is a core class of DDF when configured as a fanout proxy. It provides the methods for query and resource retrieval operations on the Sources, where all operations are enterprise-wide operations. By contrast, the Standard Catalog Framework supports create/update/delete operations of metacards, in addition to the query and resource retrieval operations.
Use the Fanout Catalog Framework if:
-
exposing a single node for enterprise access and hiding the details of the enterprise, such as federated sources' names, is desired
-
access to individual federated sources is not required
-
access to a catalog provider to create, update, and delete metacards is not required
The Fanout Catalog Framework also maintains a list of ResourceReaders for resource retrieval operations. A resource reader is matched to the scheme (i.e., protocol, such as file://) in the URI of the resource specified in the request to be retrieved.
Site information about the fanout configuration can be retrieved using the Fanout Catalog Framework. Site information includes the source’s name, version, availability, and the list of unique content types currently stored in the source (e.g., NITF). Details of the individual federated sources are not included; only information about the fanout catalog framework is returned.
Implementation Details
Exported Services
| Registered Interface | Service Property | Value |
|---|---|---|
|
shortname |
sorted |
|
event.topics |
|
ddf.catalog.CatalogFramework |
||
|
|
|
|
|
|
|
Imported Services
| Registered Interface | Availability | Multiple |
|---|---|---|
|
false |
|
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
optional |
true |
|
|
false |
Known Issues
None
Catalog Framework Camel Component
The Catalog Framework Camel component supports creating, updating, and deleting metacards using the Catalog Framework from a Camel route.
URI Format
catalog:framework
Message Headers
Catalog Framework Producer
| Header | Description |
|---|---|
operation |
the operation to perform using the catalog framework (possible values are CREATE | UPDATE | DELETE) |
Sending Messages to Catalog Framework Endpoint
Catalog Framework Producer
In Producer mode, the component accepts different inputs and has the Catalog Framework perform different operations based on the header values.
For the CREATE and UPDATE operation, the message body can contain a list of metacards or a single metacard object.
For the DELETE operation, the message body can contain a list of strings or a single string object, where the string objects represent the IDs of the metacards to be deleted. The exchange’s "in" message will be set with the affected metacards: in the case of a CREATE, it is updated with the created metacards; in the case of an UPDATE, with the updated metacards; and in the case of a DELETE, with the deleted metacards.
| Header | Message Body (Input) | Exchange Modification (Output) |
|---|---|---|
operation = CREATE |
List<Metacard> or Metacard |
exchange.getIn().getBody() updated with List of Metacards created |
operation = UPDATE |
List<Metacard> or Metacard |
exchange.getIn().getBody() updated with List of Metacards updated |
operation = DELETE |
List<String> or String (representing metacard IDs) |
exchange.getIn().getBody() updated with List of Metacards deleted |
Samples
This example demonstrates:
- Reading in some sample data from the file system.
- Using a Java bean to convert the data into a metacard.
- Setting a header value on the Exchange.
- Sending the Metacard to the Catalog Framework component for ingestion.
<route>
    <from uri="file:data/sampleData?noop=true"/>
    <bean ref="sampleDataToMetacardConverter" method="convertToMetacard"/>
    <setHeader headerName="operation">
        <constant>CREATE</constant>
    </setHeader>
    <to uri="catalog:framework"/>
</route>
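By analogy, a route that deletes metacards might look like the following sketch. The input endpoint and the assumption that each file body holds a single metacard ID string are illustrative, not prescribed by the component:

```xml
<route>
    <from uri="file:data/deleteIds?noop=true"/>
    <!-- assumes each file body is a single metacard ID string -->
    <convertBodyTo type="java.lang.String"/>
    <setHeader headerName="operation">
        <constant>DELETE</constant>
    </setHeader>
    <to uri="catalog:framework"/>
</route>
```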
Working with the Catalog Framework
Catalog Framework Reference
The Catalog Framework can be requested from the OSGi registry. See OSGi Services for more details on Blueprint injection.
<reference id="catalogFramework" interface="ddf.catalog.CatalogFramework" />
Methods
Create, Update, and Delete
Create, Update, and Delete (CUD) methods add, change, or remove stored metadata in the local Catalog Provider.
public CreateResponse create(CreateRequest createRequest) throws IngestException, SourceUnavailableException;
public UpdateResponse update(UpdateRequest updateRequest) throws IngestException, SourceUnavailableException;
public DeleteResponse delete(DeleteRequest deleteRequest) throws IngestException, SourceUnavailableException;
CUD operations process `PreIngestPlugin`s before execution and `PostIngestPlugin`s after execution.
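Conceptually, the plugin chain wraps the provider call. The sketch below models that ordering with plain string transforms; the `UnaryOperator` "plugins" are stand-ins for the real `PreIngestPlugin`/`PostIngestPlugin` interfaces, not DDF code:

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class PluginPipelineSketch {

    // Stand-in pipeline: PreIngestPlugins may rewrite the request before the
    // provider runs; PostIngestPlugins see the response afterwards.
    public static String ingest(String request,
                                List<UnaryOperator<String>> preIngest,
                                List<UnaryOperator<String>> postIngest) {
        for (UnaryOperator<String> p : preIngest) {
            request = p.apply(request);                 // PreIngestPlugin chain
        }
        String response = request + "->createResponse"; // provider executes the CUD operation
        for (UnaryOperator<String> p : postIngest) {
            response = p.apply(response);               // PostIngestPlugin chain
        }
        return response;
    }

    public static void main(String[] args) {
        System.out.println(ingest("createRequest",
                List.of(r -> r + "|validated"),
                List.of(r -> r + "|indexed")));
    }
}
```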
Query
Query methods search metadata from available Sources based on the QueryRequest properties and Federation Strategy. Sources could include Catalog Provider, Connected Sources, and Federated Sources.
public QueryResponse query(QueryRequest query) throws UnsupportedQueryException, SourceUnavailableException, FederationException;
public QueryResponse query(QueryRequest queryRequest, FederationStrategy strategy) throws SourceUnavailableException, UnsupportedQueryException, FederationException;
Query requests process `PreQueryPlugin`s before execution and `PostQueryPlugin`s after execution.
Resources
Resource methods retrieve products from Sources.
public ResourceResponse getEnterpriseResource(ResourceRequest request) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
public ResourceResponse getLocalResource(ResourceRequest request) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
public ResourceResponse getResource(ResourceRequest request, String resourceSiteName) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
Resource requests process `PreResourcePlugin`s before execution and `PostResourcePlugin`s after execution.
Sources
Source methods can get a list of Source identifiers or request descriptions about Sources.
public Set<String> getSourceIds();
public SourceInfoResponse getSourceInfo(SourceInfoRequest sourceInfoRequest) throws SourceUnavailableException;
Transforms
Transform methods provide convenience methods for using Metacard Transformers and Query Response Transformers.
// Metacard Transformer
public BinaryContent transform(Metacard metacard, String transformerId, Map<String, Serializable> requestProperties) throws CatalogTransformerException;

// Query Response Transformer
public BinaryContent transform(SourceResponse response, String transformerId, Map<String, Serializable> requestProperties) throws CatalogTransformerException;
Developing Complementary Frameworks
DDF and the underlying OSGi technology can serve as a robust infrastructure for developing frameworks that complement the DDF Catalog.
Recommendations for Framework Development
- Provide extensibility similar to that of the DDF Catalog.
- Provide a stable API with interfaces and simple implementations (refer to http://www.ibm.com/developerworks/websphere/techjournal/1007_charters/1007_charters.html).
- Make use of the DDF Catalog wherever possible to store, search, and transform information.
- Utilize OSGi standards wherever possible:
  - ConfigurationAdmin
  - MetaType
- Utilize the sub-frameworks available in DDF:
  - Karaf
  - CXF
  - PAX Web and Jetty
Developing Console Commands
Console Commands
DDF supports development of custom console commands. For more information, see the Karaf website on Extending the Console (http://karaf.apache.org/manual/latest-2.2.x/developers-guide/extending-console.html).
Custom DDF Console Commands
DDF includes custom commands for working with the Catalog, as described in the Console Commands section.
Extending Sources
Catalog sources are used to connect Catalog components to data sources, local and remote. Sources act as proxies to the actual external data sources, e.g., an RDBMS or a NoSQL database.
Existing Source Types
Catalog Provider
A Catalog provider is an implementation of a searchable and writable catalog. All sources, including federated and connected sources, support queries, but a Catalog provider additionally allows metacards to be created, updated, and deleted.
A Catalog provider typically connects to an external application or a storage system (e.g., a database), acting as a proxy for all catalog operations. The Standard Catalog Framework uses only one Catalog provider, determined by the OSGi Framework as the service reference with the highest service ranking. In the case of a tie, the service with the lowest service ID (i.e., the first registered) is used.
The Catalog Fanout Framework App does not use a Catalog provider and will fail any create/update/delete operations even if there are active Catalog providers configured.
The Catalog reference implementation comes with a Solr Catalog Provider out of the box.
Remote Sources
Remote sources are read-only data sources that support query operations but cannot be used to create, update, or delete metacards.
|
Remote sources currently extend the ResourceReader interface. However, a RemoteSource is not treated as a ResourceReader. The getSupportedSchemes() method should never be called on a RemoteSource, so the suggested implementation for a RemoteSource is to return an empty set. The retrieveResource( … ) and getOptions( … ) methods will be called and MUST be properly implemented by a RemoteSource. |
Connected Source
A connected source is a remote source that is included in all local and federated queries but remains hidden from external clients. A connected source’s identifier is removed in all query results by replacing it with DDF’s source identifier. The Catalog Framework does not reveal a connected source as a separate source when returning source information responses.
image::query-flow.png[Query Flow,500]
Federated Source
A federated source is a remote source that can be included in federated queries by request or as part of an enterprise query. Federated sources support query and site information operations only. Catalog modification operations, such as create, update, and delete, are not allowed. Federated sources also expose an event service, which allows the Catalog Framework to subscribe to event notifications when metacards are created, updated, or deleted. DDF Catalog instances can also be federated to each other; therefore, a DDF Catalog can act as a federated source to another DDF Catalog.
OpenSearch Source
The OpenSearch source provides a Federated Source that has the capability to do OpenSearch (http://www.opensearch.org/Home) queries for metadata from Content Discovery and Retrieval (CDR) Search V1.1 compliant sources. The OpenSearch source does not provide a Connected Source interface.
Installing and Uninstalling
The OpenSearch source can be installed and uninstalled using the normal processes described in the Configuring DDF section.
Configuring
This component can be configured using the normal processes described in the Configuring DDF section. The configurable properties for the OpenSearch source are accessed from the Catalog OpenSearch Federated Source Configuration in the Web Console.
Configuring the OpenSearch Source
Configurable Properties
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Source Name | shortname | String | | DDF-OS | Yes |
| OpenSearch service URL | endpointUrl | String | The OpenSearch endpoint URL, e.g., DDF’s OpenSearch endpoint (http://0.0.0.0:8181/services/catalog/query?q={searchTerms}…) | | Yes |
| Username | username | String | Username to use with HTTP Basic Authentication. This auth info will overwrite any federated auth info. Only set this if the OpenSearch endpoint requires basic authentication. | | No |
| Password | password | String | Password to use with HTTP Basic Authentication. This auth info will overwrite any federated auth info. Only set this if the OpenSearch endpoint requires basic authentication. | | No |
| Always perform local query | localQueryOnly | Boolean | Always performs a local query by setting the src=local OpenSearch parameter in the endpoint URL. This must be set if federating to another DDF. | false | Yes |
| Convert to BBox | shouldConvertToBBox | Boolean | Converts Polygon and Point-Radius searches to a Bounding Box for compatibility with legacy interfaces. The generated bounding box is a very rough representation of the input geometry. | true | Yes |
Using
Use the OpenSearch source if querying a CDR-compliant search service is desired.
Source Details
Default Security Settings (applicable to all OpenSearch Sources)
These settings provide default values for the Title, Description, and Security elements in a record. They exist because many providers fail to deliver a classification and Owner/Producer with the returned metadata. The defaults are applied only when a metadata record is returned without security settings, and the feature can be turned on or off.
. Open the Web Console.
.. http://localhost:8181/system/console
.. Username/Password: admin/admin
. Click on the Configuration tab.
. Find Catalog Security Defaults
. Select whether or not to apply these defaults by checking or unchecking the box marked "Apply Default Security Settings."
. If the applied defaults are selected, change the settings in the console to the default metadata security.
.. These settings can also be changed by editing the file <INSTALL_DIRECTORY>/etc/ddf/ddf.DefaultSiteSecurity.cfg
. Click Save at the bottom of the configuration window (or save the file).
Query Format
OpenSearch Parameter to DDF Query Mapping
| OpenSearch/CDR Parameter | DDF Data Location |
|---|---|
| q={searchTerms} | Pulled verbatim from DDF query. |
| src={fs:routeTo?} | Unused |
| mr={fs:maxResults?} | Pulled verbatim from DDF query. |
| count={count?} | Pulled verbatim from DDF query. |
| mt={fs:maxTimeout?} | Pulled verbatim from DDF query. |
| dn={idn:userDN?} | DDF Subject |
| lat={geo:lat?} | Pulled verbatim from DDF query. |
| lon={geo:lon?} | Pulled verbatim from DDF query. |
| radius={geo:radius?} | Pulled verbatim from DDF query. |
| bbox={geo:box?} | Converted from a Point-Radius DDF query. |
| polygon={geo:polygon?} | Pulled verbatim from DDF query. |
| dtstart={time:start?} | Pulled verbatim from DDF query. |
| dtend={time:end?} | Pulled verbatim from DDF query. |
| dateName={cat:dateName?} | Unused |
| filter={fsa:filter?} | Unused |
| sort={fsa:sort?} | Translated from DDF query. Format: "relevance" or "date", optionally followed by ":asc" or ":desc" (colon-delimited). |
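The sort translation above can be illustrated with a small helper that builds the colon-delimited value. The helper name is hypothetical, for illustration only, and is not part of the DDF API:

```java
public class SortParamExample {

    // Builds the sort={fsa:sort?} value described above: "relevance" or "date",
    // followed by ":asc" or ":desc" with a colon as the delimiter.
    public static String toOpenSearchSort(String field, boolean ascending) {
        return field + ":" + (ascending ? "asc" : "desc");
    }

    public static void main(String[] args) {
        System.out.println(toOpenSearchSort("date", false));      // date:desc
        System.out.println(toOpenSearchSort("relevance", true));  // relevance:asc
    }
}
```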
Implementation Details
Exported Services
| Registered Interface | Service Property | Value |
|---|---|---|
| ddf.catalog.source.FederatedSource | | |
Imported Services
| Registered Interface | Availability | Multiple | Filter |
|---|---|---|---|
| ddf.catalog.transform.InputTransformer | required | false | (&(mime-type=text/xml)(id=xml)) |
Known Issues
The OpenSearch source does not provide a Connected Source interface.
Developing a Source
Sources are components that enable DDF to talk to back-end services. They let DDF perform query and ingest operations on catalog stores and query operations on federated sources. Sources reside in the Sources area of the DDF Overview.
Creating a New Source
Implement a Source Interface
There are three types of sources that can be created, all of which support the query operation; operating on queries is the foundation for all sources. Every source must also be able to report its availability and the list of content types currently stored in its back-end data store.
- Catalog Provider (ddf.catalog.source.CatalogProvider): used to communicate with back-end storage. Allows Query and Create/Update/Delete operations.
- Federated Source (ddf.catalog.source.FederatedSource): used to communicate with remote systems. Only allows query operations.
- Connected Source (ddf.catalog.source.ConnectedSource): similar to a Federated Source with the following exceptions:
  - Queried on all local queries.
  - SiteName is hidden (masked with the DDF sourceId) in query results.
  - SiteService does not show this Source’s information separately from DDF’s.
The procedure for implementing any of the source types follows a similar format:
. Create a new class that implements the specified Source interface and ConfiguredService.
. Implement the required methods.
. Create an OSGi descriptor file to communicate with the OSGi registry. (Refer to the OSGi Services section.)
.. Import DDF packages.
.. Register source class as service to the OSGi registry.
. Deploy to DDF. (Refer to the Working with OSGi - Bundles section.)
Catalog Provider
- Create a Java class that implements CatalogProvider.
  public class TestCatalogProvider implements ddf.catalog.source.CatalogProvider
- Implement the required methods from the ddf.catalog.source.CatalogProvider interface.
  public CreateResponse create(CreateRequest createRequest) throws IngestException;
  public UpdateResponse update(UpdateRequest updateRequest) throws IngestException;
  public DeleteResponse delete(DeleteRequest deleteRequest) throws IngestException;
- Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
  Import-Package: ddf.catalog, ddf.catalog.source
- Export the service to the OSGi registry.
  <service ref="[[TestCatalogProvider]]" interface="ddf.catalog.source.CatalogProvider" />
The DDF Integrator’s Guide provides details on the Catalog Providers that come with DDF out of the box (refer to the Dummy Catalog Provider).
|
A code example of a Catalog Provider delivered with DDF is the Catalog Solr Embedded Provider. |
Federated Source
- Create a Java class that implements FederatedSource and ConfiguredService.
  public class TestFederatedSource implements ddf.catalog.source.FederatedSource, ddf.catalog.service.ConfiguredService
- Implement the required methods of the ddf.catalog.source.FederatedSource and ddf.catalog.service.ConfiguredService interfaces.
- Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
  Import-Package: ddf.catalog, ddf.catalog.source
- Export the service to the OSGi registry.
  <service ref="[[TestFederatedSource]]" interface="ddf.catalog.source.FederatedSource" />
The DDF Integrator’s Guide provides details on the Federated Sources that come with DDF out of the box (refer to OpenSearch Source).
|
A code example of a Federated Source delivered with DDF can be found in |
Connected Source
- Create a Java class that implements ConnectedSource and ConfiguredService.
  public class TestConnectedSource implements ddf.catalog.source.ConnectedSource, ddf.catalog.service.ConfiguredService
- Implement the required methods of the ddf.catalog.source.ConnectedSource and ddf.catalog.service.ConfiguredService interfaces.
- Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
  Import-Package: ddf.catalog, ddf.catalog.source
- Export the service to the OSGi registry.
  <service ref="[[TestConnectedSource]]" interface="ddf.catalog.source.ConnectedSource" />
|
Some Providers need to make Web Service calls through JAXB clients. It is best NOT to create a JAXB client as a global variable; intermittent failures have been observed in Providers and federated sources when clients are created that way. To avoid this issue, create the JAXB client inside each method that requires it. |
Exception Handling
In general, sources should only send information back related to the call, not implementation details.
Examples
- Return a "Site XYZ not found" message rather than the full stack trace with the original site-not-found exception.
- If the caller issues a malformed search request, return an error describing the correct form, or specifically what was not recognized in the request. Do not return the exception and stack trace where the parsing broke.
- If the caller leaves something out, do not return the null pointer exception with a stack trace; rather, return a generic exception with the message "xyz was missing."
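A minimal sketch of this guidance, using a hypothetical helper (not DDF API): the caller receives a clean message while the underlying exception stays server-side:

```java
public class SourceErrorExample {

    // Convert an internal failure into the caller-facing message recommended
    // above. In a real source, the cause would be logged here; it is never
    // returned to the caller.
    public static String describeFailure(String siteName, Exception cause) {
        return "Site " + siteName + " not found";
    }

    public static void main(String[] args) {
        Exception internal = new IllegalStateException("connection refused: internal-host:5432");
        System.out.println(describeFailure("XYZ", internal));
    }
}
```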
Additional Information
- Three Rules for Effective Exception Handling (http://today.java.net/pub/a/today/2003/12/04/exceptions.html)
Developing a Filter Delegate
Filter Delegates help reduce the complexity of parsing OGC Filters. The reference Filter Adapter implementation contains the necessary boilerplate visitor code and input normalization to handle commonly supported OGC Filters.
Creating a New Filter Delegate
A Filter Delegate contains the logic that converts normalized filter input into a form that the targeted data source can handle. Delegate methods are called in depth-first order as the Filter Adapter visits filter nodes.
Implementing the Filter Delegate
- Create a Java class extending FilterDelegate.
  public class ExampleDelegate extends ddf.catalog.filter.FilterDelegate<ExampleReturnObjectType> {
- FilterDelegate will throw an appropriate exception for all methods not implemented. Refer to the DDF JavaDoc for more details about what is expected of each FilterDelegate method.
|
A code example of a Filter Delegate can be found in ddf.catalog.filter.proxy.adapter.test of the filter-proxy bundle. |
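To illustrate the pattern without a DDF dependency, the following self-contained sketch mimics a delegate that renders filters as a query string. MiniFilterDelegate and StringDelegate are illustrative stand-ins, not the real ddf.catalog.filter classes (the real FilterDelegate has many more methods):

```java
import java.util.List;

public class ExampleDelegateSketch {

    // Minimal stand-in for FilterDelegate<T>: unimplemented methods throw,
    // mirroring the behavior described above.
    public static abstract class MiniFilterDelegate<T> {
        public T and(List<T> operands) {
            throw new UnsupportedOperationException("and");
        }
        public T propertyIsEqualTo(String name, String literal) {
            throw new UnsupportedOperationException("propertyIsEqualTo");
        }
    }

    // Delegate that converts normalized filter input into a SQL-like string.
    public static class StringDelegate extends MiniFilterDelegate<String> {
        @Override
        public String and(List<String> operands) {
            return "(" + String.join(" AND ", operands) + ")";
        }
        @Override
        public String propertyIsEqualTo(String name, String literal) {
            return name + " = '" + literal + "'";
        }
    }

    public static void main(String[] args) {
        StringDelegate d = new StringDelegate();
        System.out.println(d.and(List.of(
                d.propertyIsEqualTo("title", "myTitle"),
                d.propertyIsEqualTo("metadata-content-type", "myType"))));
    }
}
```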
Throwing Exceptions
Filter delegate methods can throw UnsupportedOperationException run-time exceptions. The GeotoolsFilterAdapterImpl will catch and re-throw these exceptions as UnsupportedQueryExceptions.
Using the Filter Adapter
The FilterAdapter can be requested from the OSGi registry. (Refer to Working with OSGi for more details on Blueprint injection.)
<reference id="filterAdapter" interface="ddf.catalog.filter.FilterAdapter" />
The Query in a QueryRequest implements the Filter interface. The Query can be passed to a FilterAdapter and FilterDelegate to process the Filter.
@Override
public ddf.catalog.operation.QueryResponse query(ddf.catalog.operation.QueryRequest queryRequest)
    throws ddf.catalog.source.UnsupportedQueryException {

    ddf.catalog.operation.Query query = queryRequest.getQuery();

    ddf.catalog.filter.FilterDelegate<ExampleReturnObjectType> delegate = new ExampleDelegate();

    // ddf.catalog.filter.FilterAdapter adapter injected via Blueprint
    ExampleReturnObjectType result = adapter.adapt(query, delegate);

    // ... execute the query against the back-end data source using 'result'
    // and build the QueryResponse to return
}
Import the DDF Catalog API Filter package and the reference implementation package of the Filter Adapter in the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog, ddf.catalog.filter, ddf.catalog.source
Filter Support
Not all OGC Filters are exposed at this time. If demand for further OGC Filter functionality is requested, it can be added to the Filter Adapter and Delegate so sources can support more complex filters. The following OGC Filter types are currently available:
| Logical |
|---|
| And |
| Or |
| Not |
| Include |
| Exclude |
| Property Comparison |
|---|
| PropertyIsBetween |
| PropertyIsEqualTo |
| PropertyIsGreaterThan |
| PropertyIsGreaterThanOrEqualTo |
| PropertyIsLessThan |
| PropertyIsLessThanOrEqualTo |
| PropertyIsLike |
| PropertyIsNotEqualTo |
| PropertyIsNull |
| Spatial | Definition |
|---|---|
| Beyond | True if the geometry being tested is beyond the stated distance of the geometry provided. |
| Contains | True if the second geometry is wholly inside the first geometry. |
| Crosses | True if the intersection of the two geometries results in a value whose dimension is less than that of the geometries, the maximum dimension of the intersection value includes points interior to both geometries, and the intersection value is not equal to either geometry. |
| Disjoint | True if the two geometries do not touch or intersect. |
| DWithin | True if the geometry being tested is within the stated distance of the geometry provided. |
| Intersects | True if the two geometries intersect. This is a convenience method, as Not Disjoint(A,B) gives the same result. |
| Overlaps | True if the intersection of the geometries results in a value of the same dimension as the geometries that is different from both of the geometries. |
| Touches | True if and only if the only common points of the two geometries are in the union of the boundaries of the geometries. |
| Within | True if the first geometry is wholly inside the second geometry. |
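The Intersects/Disjoint relationship noted in the table can be demonstrated with a one-dimensional stand-in (intervals on a line instead of geometries); this is an illustration of the predicate logic, not DDF code:

```java
public class SpatialRelationSketch {

    // 1-D stand-in for the geometry predicates above: closed intervals [a0, a1]
    // and [b0, b1] on a line.
    public static boolean intersects(double a0, double a1, double b0, double b1) {
        return a0 <= b1 && b0 <= a1;
    }

    // Disjoint is defined as Not Intersects, mirroring the table's note that
    // Intersects(A,B) == Not Disjoint(A,B).
    public static boolean disjoint(double a0, double a1, double b0, double b1) {
        return !intersects(a0, a1, b0, b1);
    }

    public static void main(String[] args) {
        System.out.println(intersects(0, 5, 3, 8)); // true: [0,5] overlaps [3,8]
        System.out.println(disjoint(0, 2, 3, 8));   // true: [0,2] never touches [3,8]
    }
}
```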
Extending Catalog Transformers
Transformers transform data to and from various formats. They can be categorized by when they are invoked and used. The existing types are Input transformers, Metacard transformers, and Query Response transformers. Additionally, XSLT transformers are provided to aid in developing custom, lightweight Metacard and Query Response transformers.

Transformers are utility objects used to transform a set of standard DDF components into a desired format, such as PDF, GeoJSON, XML, or any other format. For instance, a transformer can convert a set of query results into an easy-to-read GeoJSON format (GeoJSON Transformer) or into an RSS feed that can be published to a URL for subscription.

A major benefit of transformers is that they can be registered in the OSGi Service Registry, so any other developer can access them based on their standard interface and self-assigned identifier, referred to as the "shortname." Transformers are often used by endpoints for data conversion in a system-standard way. Multiple endpoints can use the same transformer, a different transformer, or their own published transformer.
|
The current transformers do not support non-Western characters (e.g., Hebrew). If the data being transformed contains these characters, they may not be displayed properly after transformation (e.g., a word may show up as squares). It is recommended not to use international character sets. |
Working with Transformers
The ddf.catalog.transform package includes the InputTransformer, MetacardTransformer, and QueryResponseTransformer interfaces. All implementations can be accessed using the Catalog Framework or OSGi Service Registry, as long as the implementations have been registered with the Service Registry.
Catalog Framework
The CatalogFramework provides convenient methods to transform Metacards and QueryResponses using a reference to the CatalogFramework. See Working with the Catalog Framework for more details on the method signatures.
It is easy to execute the convenience transform methods on the CatalogFramework instance.
.Query Response Transform Example
// inject CatalogFramework instance or retrieve an instance
private CatalogFramework catalogFramework;

public RSSEndpoint(CatalogFramework catalogFramework) {
    this.catalogFramework = catalogFramework;
    // implementation
}

// Other implementation details ...

private void convert(QueryResponse queryResponse) {
    // ...
    String transformerId = "rss";
    BinaryContent content = catalogFramework.transform(queryResponse, transformerId, null);
    // ...
}
Dependency Injection
Using Blueprint or another injection framework, transformers can be injected from the OSGi Service Registry. See OSGi Services for more details on how to use injected instances.
<reference id="[[Reference Id]]" interface="ddf.catalog.transform.[[Transformer Interface Name]]" filter="(shortname=[[Transformer Identifier]])" />
Each transformer has one or more transform methods that can be used to get the desired output.
ddf.catalog.transform.InputTransformer inputTransformer = retrieveInjectedInstance();
Metacard entry = inputTransformer.transform(messageInputStream);

ddf.catalog.transform.MetacardTransformer metacardTransformer = retrieveInjectedInstance();
BinaryContent content = metacardTransformer.transform(metacard, arguments);

ddf.catalog.transform.QueryResponseTransformer queryResponseTransformer = retrieveInjectedInstance();
BinaryContent content = queryResponseTransformer.transform(sourceResponse, arguments);
See Working with OSGi - Service Registry for more details.
OSGi Service Registry
|
In the vast majority of cases, working with the OSGi Service Reference directly should be avoided. Instead, dependencies should be injected via a dependency injection framework like Blueprint. |
Transformers are registered with the OSGi Service Registry. Using a BundleContext and a filter, references to a registered service can be retrieved.
ServiceReference[] refs = bundleContext.getServiceReferences(
    ddf.catalog.transform.InputTransformer.class.getName(),
    "(shortname=" + transformerId + ")");
InputTransformer inputTransformer = (InputTransformer) bundleContext.getService(refs[0]);
Metacard entry = inputTransformer.transform(messageInputStream);
Included Input Transformers
An input transformer transforms raw data (text/binary) into a Metacard.
Once converted to a Metacard, the data can be used in a variety of ways, such as in an UpdateRequest or CreateResponse, or within Catalog Endpoints or Extending Sources. For instance, an input transformer could be used to receive and translate XML into a Metacard so that it can be placed within a CreateRequest and ingested into the Catalog. Input transformers should be registered within the Service Registry with the interface ddf.catalog.transform.InputTransformer in order to notify Catalog components of any new transformers.
Tika Input Transformer
The Tika Input Transformer is the default input transformer responsible for translating Microsoft Word, Microsoft Excel, Microsoft PowerPoint, OpenOffice Writer, and PDF documents into a Catalog Metacard. This input transformer utilizes Apache Tika to provide basic support for these mime types. As such, the metadata extracted from these types of documents is the metadata that is common across all of these document types, e.g., creation date, author, last modified date, etc. The Tika Input Transformer’s main purpose is to ingest these types of content into the DDF Content Repository and the Metadata Catalog.
The Tika input transformer is given a service ranking (priority) of -1 so that it is guaranteed to be the last input transformer invoked. This allows any registered input transformers that are more specific to these document types to be invoked instead of this rudimentary default input transformer.
Installing and Uninstalling
Install the catalog-transformer-tika feature using the Web Console (http://localhost:8181/system/console) or System Console. This feature is uninstalled by default.
Configuring
None
Using
Use the Tika Input Transformer for ingesting Microsoft documents, OpenOffice documents, or PDF documents into the DDF Content Repository and/or the Metadata Catalog.
Service Properties
| Key | Value |
|---|---|
| mime-type | application/pdf, application/vnd.openxmlformats-officedocument.wordprocessingml.document, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, application/vnd.openxmlformats-officedocument.presentationml.presentation, application/vnd.ms-powerpoint.presentation.macroenabled.12, application/vnd.ms-powerpoint.slideshow.macroenabled.12, application/vnd.openxmlformats-officedocument.presentationml.slideshow, application/vnd.ms-powerpoint.template.macroenabled.12, application/vnd.oasis.opendocument.text |
| shortname | |
| id | |
| title | Tika Input Transformer |
| description | Default Input Transformer for all mime types. |
| service.ranking | -1 |
Implementation Details
This input transformer maps the metadata common across all mime types to applicable metacard attributes in the default MetacardType.
GeoJSON Input Transformer
The GeoJSON input transformer is responsible for translating specific GeoJSON into a Catalog metacard.
Installing and Uninstalling
Install the catalog-rest-endpoint feature using the Web Console (http://localhost:8181/system/console) or System Console.
Configuring
None
Using
For example, use the REST Endpoint to HTTP POST a GeoJSON metacard to the Catalog. Once the REST Endpoint receives the GeoJSON metacard, it is converted to a Catalog metacard.
curl -X POST -i -H "Content-Type: application/json" -d "@metacard.json" http://localhost:8181/services/catalog
Conversion
A GeoJSON object (http://geojson.org/geojson-spec.html#geojson-objects) consists of a single JSON object, which can be a geometry, a feature, or a FeatureCollection. This input transformer only converts "feature" objects into metacards, a natural choice since feature objects include both geometry information and a list of properties. A bare geometry object, such as a lone LineString, does not carry enough information to create a metacard. The transformer currently does not handle FeatureCollections either, though support could be added in the future.
|
Cannot create Metacard from this limited GeoJSON
|
The following sample will create a valid metacard:
.Sample Parseable GeoJSON (Point)
{
"properties": {
"title": "myTitle",
"thumbnail": "CA==",
"resource-uri": "http://example.com",
"created": "2012-09-01T00:09:19.368+0000",
"metadata-content-type-version": "myVersion",
"metadata-content-type": "myType",
"metadata": "<xml></xml>",
"modified": "2012-09-01T00:09:19.368+0000"
},
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
30.0,
10.0
]
}
}
In the current implementation, Metacard.LOCATION is not taken from the properties list as WKT but is instead interpreted from the geometry JSON object. The geometry object is formatted according to the GeoJSON standard (http://geojson.org/geojson-spec.html). Dates are in the ISO 8601 standard. White space is ignored, as in most cases with JSON. Binary data is accepted as Base64. XML must be properly escaped, as is required for normal JSON.
Only Required Attributes are recognized in the properties currently.
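The encodings noted above (Base64 binary, ISO 8601 dates) can be checked against the sample record. This is an illustrative snippet, not DDF code; note that the sample's +0000 offset needs an explicit pattern, since the default ISO parser expects a colon in the offset:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Base64;

public class GeoJsonFieldExample {

    // Binary fields such as "thumbnail" arrive Base64-encoded.
    public static byte[] decodeThumbnail(String base64) {
        return Base64.getDecoder().decode(base64);
    }

    // Dates such as "created" use ISO 8601; the sample uses a "+0000" offset,
    // which the "Z" pattern letter accepts.
    public static OffsetDateTime parseCreated(String iso8601) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
        return OffsetDateTime.parse(iso8601, fmt);
    }

    public static void main(String[] args) {
        System.out.println(decodeThumbnail("CA==").length);                          // 1
        System.out.println(parseCreated("2012-09-01T00:09:19.368+0000").getYear());  // 2012
    }
}
```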
Metacard Extensibility
GeoJSON supports custom, extensible properties on the incoming GeoJSON. It uses DDF’s extensible metacard support to do this. To have those customized attributes understood by the system, a corresponding MetacardType must be registered with the MetacardTypeRegistry. That MetacardType must be specified by name in the metacard-type property of the incoming GeoJSON. If a MetacardType is specified on the GeoJSON input, the customized properties can be processed, cataloged, and indexed.
{
"properties": {
"title": "myTitle",
"thumbnail": "CA==",
"resource-uri": "http://example.com",
"created": "2012-09-01T00:09:19.368+0000",
"metadata-content-type-version": "myVersion",
"metadata-content-type": "myType",
"metadata": "<xml></xml>",
"modified": "2012-09-01T00:09:19.368+0000",
"min-frequency": "10000000",
"max-frequency": "20000000",
"metacard-type": "ddf.metacard.custom.type"
},
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
30.0,
10.0
]
}
}
When the GeoJSON Input Transformer gets GeoJSON with the MetacardType specified, it will perform a lookup in the MetacardTypeRegistry to obtain the specified MetacardType in order to understand how to parse the GeoJSON. If no MetacardType is specified, the GeoJSON Input Transformer will assume the default MetacardType. If an unregistered MetacardType is specified, an exception will be returned to the client indicating that the MetacardType was not found.
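The lookup behavior just described can be sketched as follows. The registry is modeled as a simple set of names and the default type name is an assumption for illustration; this is not the real MetacardTypeRegistry API:

```java
import java.util.Set;

public class MetacardTypeLookupSketch {

    // Hypothetical default-type name, standing in for the system default MetacardType.
    static final String DEFAULT_TYPE = "ddf.metacardType.default";

    // Mirrors the described behavior: no metacard-type means the default type,
    // an unregistered type raises an error back to the client.
    public static String resolve(Set<String> registeredTypes, String requested) {
        if (requested == null) {
            return DEFAULT_TYPE;
        }
        if (!registeredTypes.contains(requested)) {
            throw new IllegalArgumentException("MetacardType not found: " + requested);
        }
        return requested;
    }

    public static void main(String[] args) {
        Set<String> registry = Set.of("ddf.metacard.custom.type");
        System.out.println(resolve(registry, "ddf.metacard.custom.type"));
        System.out.println(resolve(registry, null));
    }
}
```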
Packaging Details
Feature Information
N/A
Included Bundles
N/A
Services
Exported Services
| Service Property | Value |
|---|---|
| mime-type | application/json |
| id | geojson |
Implementation Details
| Registered Interface | Service Property | Value |
|---|---|---|
| ddf.catalog.transform.InputTransformer | mime-type | application/json |
| | id | geojson |
Known Issues
Does not handle multiple geometries yet.
Developing an Input Transformer
Using Java
- Create a new Java class that implements ddf.catalog.transform.InputTransformer.
  public class SampleInputTransformer implements ddf.catalog.transform.InputTransformer
- Implement the transform methods.
  public Metacard transform(InputStream input) throws IOException, CatalogTransformerException
  public Metacard transform(InputStream input, String id) throws IOException, CatalogTransformerException
- Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
  Import-Package: ddf.catalog,ddf.catalog.transform
- Create an OSGi descriptor file to communicate with the OSGi Service Registry (described in the Working with OSGi section). Export the service to the OSGi Registry and declare service properties.
...
<service ref="[[SampleInputTransformer]]" interface="ddf.catalog.transform.InputTransformer">
<service-properties>
<entry key="shortname" value="[[sampletransform]]" />
<entry key="title" value="[[Sample Input Transformer]]" />
<entry key="description" value="[[A new transformer for metacard input.]]" />
</service-properties>
</service>
...
- Deploy the OSGi bundle to the OSGi runtime.
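The steps above can be sketched as a compilable example. The Metacard class below is a local stand-in so the code runs without the DDF jars; a real implementation would implement ddf.catalog.transform.InputTransformer and return ddf.catalog.data.Metacard instead:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// A self-contained sketch of the transform pattern from the steps above.
// Metacard here is a stand-in, not the DDF interface.
public class SampleInputTransformerSketch {
    static class Metacard {
        String id;
        String title;
    }

    public Metacard transform(InputStream input) throws IOException {
        return transform(input, null);
    }

    public Metacard transform(InputStream input, String id) throws IOException {
        Metacard metacard = new Metacard();
        metacard.id = id;
        // Toy parse: treat the whole stream as the title attribute.
        metacard.title = new String(input.readAllBytes(), StandardCharsets.UTF_8).trim();
        return metacard;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("myTitle".getBytes(StandardCharsets.UTF_8));
        Metacard m = new SampleInputTransformerSketch().transform(in, "0123");
        System.out.println(m.id + " " + m.title); // 0123 myTitle
    }
}
```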
Variable Descriptions
| Key | Description of Value | Example |
|---|---|---|
| shortname | (Required) An abbreviation for the return-type of the BinaryContent being sent to the user. | atom |
| title | (Optional) A user-readable title that describes (in greater detail than the shortname) the service. | Atom Entry Transformer Service |
| description | (Optional) A short, human-readable description that describes the functionality of the service and the output. | This service converts a single metacard XML document to an atom entry element. |
Create an Input Transformer Using Apache Camel
Alternatively, make an Apache Camel route in a blueprint file and deploy it using a feature file or via hot deploy.
Design Pattern
From
When using from catalog:inputtransformer?id=text/xml, an Input Transformer will be created and registered in the OSGi registry with an id of text/xml.
To
When using to catalog:inputtransformer?id=text/xml, an Input Transformer with an id matching text/xml will be discovered from the OSGi registry and invoked.
Message Formats
| Exchange Type | Field | Type |
|---|---|---|
| Request (comes from <from> in the route) | body | java.io.InputStream |
| Response (returned after called via <to> in the route) | body | |
Examples
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route>
<from uri="catalog:inputtransformer?mimeType=RAW(id=text/xml;id=vehicle)"/>
<to uri="xslt:vehicle.xslt" /> <!-- must be on classpath for this bundle -->
<to uri="catalog:inputtransformer?mimeType=RAW(id=application/json;id=geojson)" />
</route>
</camelContext>
</blueprint>
|
It's always a good idea to wrap the mimeType value with the RAW parameter, as shown in the example above. This ensures the value is taken exactly as-is, which is especially useful when the value contains special characters. |
| Line Number | Description |
|---|---|
| 1 | Defines this as an Apache Aries blueprint file. |
| 2 | Defines the Apache Camel context that contains the route. |
| 3 | Defines the start of an Apache Camel route. |
| 4 | Defines the endpoint/consumer for the route. In this case it is the DDF custom catalog component, an InputTransformer registered with an id of text/xml;id=vehicle, meaning it can transform an InputStream of vehicle data into a metacard. |
| 5 | Defines the XSLT used to transform the vehicle input into GeoJSON format, using the XSLT component provided by Apache Camel. Note that the specified XSL stylesheet must be on the classpath of the bundle in which this blueprint file is packaged. |
|
An example of using an Apache Camel route to define an Input Transformer. |
Included Metacard Transformers
A metacard transformer transforms a metacard into other data formats.
HTML Metacard Transformer
The HTML metacard transformer is responsible for translating a metacard into an HTML formatted document.
Installing and Uninstalling
Install the catalog-transformer-html feature using the Web Console (http://localhost:8181/system/console) or System Console.
Configuring
None
Using
Using the REST Endpoint for example, request a metacard with the transform option set to the HTML shortname.
http://localhost:8181/services/catalog/0123456789abcdef0123456789abcdef?transform=html
Example Output
(Screenshot: html metacard.png)
Implementation Details
| Registered Interface | Service Property | Value |
|---|---|---|
| ddf.catalog.transform.MetacardTransformer | title | View as html… |
| | description | Transforms query results into html |
| | shortname (for backwards compatibility) | html |
Known Issues
None
XML Metacard Transformer
The XML metacard transformer is responsible for translating a metacard into an XML-formatted document. The metacard element that is generated is an extension of gml:AbstractFeatureType, which makes the output of this transformer GML 3.1.1 compatible.
Installing and Uninstalling
This transformer comes installed out of the box and runs on startup. To install or uninstall it manually, manage the catalog-transformer-xml feature via the Web Console (http://localhost:8181/system/console) or the System Console.
Configuring
None
Using
Using the REST Endpoint for example, request a metacard with the transform option set to the XML shortname.
http://localhost:8181/services/catalog/ac0c6917d5ee45bfb3c2bf8cd2ebaa67?transform=xml
Implementation Details
Metacard to XML Mappings
| Metacard Variables | XML Element |
|---|---|
| id | metacard/@gml:id |
| metacardType | metacard/type |
| sourceId | metacard/source |
| all other attributes | metacard/<AttributeType>[name='<AttributeName>']/value |
AttributeTypes
| XML Adapted Attributes |
|---|
| boolean |
| base64Binary |
| dateTime |
| double |
| float |
| geometry |
| int |
| long |
| object |
| short |
| string |
| stringxml |
Known Issues
None
GeoJSON Metacard Transformer
GeoJSON Metacard Transformer translates a Metacard into GeoJSON.
Installing and Uninstalling
Install the catalog-transformer-json feature using the Web Console (http://localhost:8181/system/console) or System Console.
Configuring
None
Using
The GeoJSON Metacard Transformer can be used programmatically by requesting a MetacardTransformer with the id geojson. It can also be used within the REST Endpoint by providing the transform option as geojson.
http://localhost:8181/services/catalog/0123456789abcdef0123456789abcdef?transform=geojson
{
"properties":{
"title":"myTitle",
"thumbnail":"CA==",
"resource-uri":"http:\/\/example.com",
"created":"2012-08-31T23:55:19.518+0000",
"metadata-content-type-version":"myVersion",
"metadata-content-type":"myType",
"metadata":"<xml>text<\/xml>",
"modified":"2012-08-31T23:55:19.518+0000",
"metacard-type": "ddf.metacard"
},
"type":"Feature",
"geometry":{
"type":"LineString",
"coordinates":[
[
30.0,
10.0
],
[
10.0,
30.0
],
[
40.0,
40.0
]
]
}
}
Implementation Details
| Registered Interface | Service Property | Value |
|---|---|---|
| ddf.catalog.transform.MetacardTransformer | mime-type | application/json |
| | id | geojson |
| | shortname (for backwards compatibility) | geojson |
Known Issues
None
Thumbnail Metacard Transformer
The Thumbnail Metacard Transformer retrieves the thumbnail bytes of a Metacard by returning the Metacard.THUMBNAIL attribute value.
Installing and Uninstalling
This transformer is installed out of the box. To uninstall the transformer, you must stop or uninstall the bundle.
Configuring
None
Using
Endpoints or other components can retrieve an instance of the Thumbnail Metacard Transformer using its id thumbnail.
<reference id="metacardTransformer" interface="ddf.catalog.transform.MetacardTransformer" filter="(id=thumbnail)"/>
The Thumbnail Metacard Transformer returns a BinaryContent object of the Metacard.THUMBNAIL bytes and a MIME Type of image/jpeg.
Implementation Details
| Service Property | Value |
|---|---|
| id | thumbnail |
| shortname | thumbnail |
| mime-type | image/jpeg |
Known Issues
None
Metadata Metacard Transformer
The Metadata Metacard Transformer returns the Metacard.METADATA attribute when given a metacard. The MIME Type returned is text/xml.
Installing and Uninstalling
Catalog Transformers application will install this feature when deployed. This transformer’s feature, catalog-transformer-metadata, can be uninstalled or installed using the normal processes described in the Configuring DDF section of this documentation.
Configuring
None
Using
The Metadata Metacard Transformer can be used programmatically by requesting a MetacardTransformer with the id metadata. It can also be used within the REST Endpoint by providing the transform option as metadata.
http://localhost:8181/services/catalog/0123456789abcdef0123456789abcdef?transform=metadata
Implementation Details
| Registered Interface | Service Property | Value |
|---|---|---|
| ddf.catalog.transform.MetacardTransformer | mime-type | text/xml |
| | id | metadata |
| | shortname (for backwards compatibility) | metadata |
Known Issues
None.
Resource Metacard Transformer
The Resource Metacard Transformer retrieves the resource bytes of a metacard by returning the product associated with the metacard.
Installing and Uninstalling
This transformer is installed by installing the catalog-transformer-resource feature. To uninstall the transformer, uninstall the catalog-transformer-resource feature.
Configuring
None
Using
Endpoints or other components can retrieve an instance of the Resource Metacard Transformer using its id resource.
<reference id="metacardTransformer" interface="ddf.catalog.transform.MetacardTransformer" filter="(id=resource)"/>
Implementation Details
| Service Property | Value |
|---|---|
| id | resource |
| shortname | resource |
| mime-type | application/octet-stream |
| title | |
Known Issues
None
Developing a Metacard Transformer
In general, a MetacardTransformer is used to transform a Metacard into some format useful to the end user or as input to another process. Programmatically, a MetacardTransformer transforms a Metacard into a BinaryContent instance, which contains the Metacard translated into the desired final format. Metacard transformers can be used through the Catalog Framework transform convenience method or requested from the OSGi Service Registry by endpoints or other bundles.
Create a New Metacard Transformer
- Create a new Java class that implements ddf.catalog.transform.MetacardTransformer.
  public class SampleMetacardTransformer implements ddf.catalog.transform.MetacardTransformer
- Implement the transform method.
  public BinaryContent transform(Metacard metacard, Map<String, Serializable> arguments) throws CatalogTransformerException
- Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
  Import-Package: ddf.catalog,ddf.catalog.transform
- Create an OSGi descriptor file to communicate with the OSGi Service Registry (described in the Working with OSGi section). Export the service to the OSGi registry and declare service properties.
...
<service ref="[[SampleMetacardTransformer]]" interface="ddf.catalog.transform.MetacardTransformer">
<service-properties>
<entry key="shortname" value="[[sampletransform]]" />
<entry key="title" value="[[Sample Metacard Transformer]]" />
<entry key="description" value="[[A new transformer for metacards.]]" />
</service-properties>
</service>
...
- Deploy the OSGi bundle to the OSGi runtime.
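As a compilable illustration of the transform contract above, the sketch below pairs output bytes with a MIME type. BinaryContent here is a local stand-in for ddf.catalog.data.BinaryContent, and the String parameter stands in for a full Metacard, so the example runs without the DDF jars:

```java
import java.nio.charset.StandardCharsets;

// A self-contained sketch of a MetacardTransformer-style transform.
// BinaryContent is a stand-in, not the DDF class.
public class SampleMetacardTransformerSketch {
    static class BinaryContent {
        final byte[] bytes;
        final String mimeType;
        BinaryContent(byte[] bytes, String mimeType) {
            this.bytes = bytes;
            this.mimeType = mimeType;
        }
    }

    // Toy transform: render a title attribute as a minimal JSON document.
    public BinaryContent transform(String title) {
        String json = "{\"properties\":{\"title\":\"" + title + "\"}}";
        return new BinaryContent(json.getBytes(StandardCharsets.UTF_8), "application/json");
    }

    public static void main(String[] args) {
        BinaryContent content = new SampleMetacardTransformerSketch().transform("myTitle");
        System.out.println(content.mimeType); // application/json
        System.out.println(new String(content.bytes, StandardCharsets.UTF_8));
    }
}
```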
Variable Descriptions
| Key | Description of Value | Example |
|---|---|---|
| shortname | (Required) An abbreviation for the return type of the BinaryContent being sent to the user. | atom |
| title | (Optional) A user-readable title that describes (in greater detail than the shortname) the service. | Atom Entry Transformer Service |
| description | (Optional) A short, human-readable description that describes the functionality of the service and the output. | This service converts a single metacard XML document to an atom entry element. |
Included Query Response Transformers
Query Response transformers convert query responses into other data formats.
Atom Query Response Transformer
The Atom Query Response Transformer transforms a query response into an Atom 1.0 (http://tools.ietf.org/html/rfc4287) feed. The Atom transformer maps a QueryResponse object as described in the Query Result Mapping.
Installing and Uninstalling
Catalog Transformers application will install this feature when deployed. This transformer’s feature, catalog-transformer-atom, can be uninstalled or installed using the normal processes described in the Configuring DDF section.
Configuring
none.
Using
Use this transformer when Atom is the preferred medium of communicating information, such as for feed readers or federation. An integrator could use this with an endpoint to transform query responses into an Atom feed.
For example, clients can use the OpenSearch Endpoint (https://tools.codice.org/#). The client can query with the format option set to the shortname, atom.
http://localhost:8181/services/catalog/query?q=ddf&format=atom
Developers could use this transformer to programmatically transform QueryResponse objects on the fly. (See Implementation Details for details about acquiring the service.)
Sample Results
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
<title type="text">Query Response</title>
<updated>2013-01-31T23:22:37.298Z</updated>
<id>urn:uuid:a27352c9-f935-45f0-9b8c-5803095164bb</id>
<link href="#" rel="self" />
<author>
<name>Lockheed Martin</name>
</author>
<generator version="2.1.0.20130129-1341">ddf123</generator>
<os:totalResults>1</os:totalResults>
<os:itemsPerPage>10</os:itemsPerPage>
<os:startIndex>1</os:startIndex>
<entry xmlns:relevance="http://a9.com/-/opensearch/extensions/relevance/1.0/" xmlns:fs="http://a9.com/-/opensearch/extensions/federation/1.0/"
xmlns:georss="http://www.georss.org/georss">
<fs:resultSource fs:sourceId="ddf123" />
<relevance:score>0.19</relevance:score>
<id>urn:catalog:id:ee7a161e01754b9db1872bfe39d1ea09</id>
<title type="text">F-15 lands in Libya; Crew Picked Up</title>
<updated>2013-01-31T23:22:31.648Z</updated>
<published>2013-01-31T23:22:31.648Z</published>
<link href="http://123.45.67.123:8181/services/catalog/ddf123/ee7a161e01754b9db1872bfe39d1ea09" rel="alternate" title="View Complete Metacard" />
<category term="Resource" />
<georss:where xmlns:gml="http://www.opengis.net/gml">
<gml:Point>
<gml:pos>32.8751900768792 13.1874561309814</gml:pos>
</gml:Point>
</georss:where>
<content type="application/xml">
<ns3:metacard xmlns:ns3="urn:catalog:metacard" xmlns:ns2="http://www.w3.org/1999/xlink" xmlns:ns1="http://www.opengis.net/gml"
xmlns:ns4="http://www.w3.org/2001/SMIL20/" xmlns:ns5="http://www.w3.org/2001/SMIL20/Language" ns1:id="4535c53fc8bc4404a1d32a5ce7a29585">
<ns3:type>ddf.metacard</ns3:type>
<ns3:source>ddf.distribution</ns3:source>
<ns3:geometry name="location">
<ns3:value>
<ns1:Point>
<ns1:pos>32.8751900768792 13.1874561309814</ns1:pos>
</ns1:Point>
</ns3:value>
</ns3:geometry>
<ns3:dateTime name="created">
<ns3:value>2013-01-31T16:22:31.648-07:00</ns3:value>
</ns3:dateTime>
<ns3:dateTime name="modified">
<ns3:value>2013-01-31T16:22:31.648-07:00</ns3:value>
</ns3:dateTime>
<ns3:stringxml name="metadata">
<ns3:value>
<ns6:xml xmlns:ns6="urn:sample:namespace" xmlns="urn:sample:namespace">Example description.</ns6:xml>
</ns3:value>
</ns3:stringxml>
<ns3:string name="metadata-content-type-version">
<ns3:value>myVersion</ns3:value>
</ns3:string>
<ns3:string name="metadata-content-type">
<ns3:value>myType</ns3:value>
</ns3:string>
<ns3:string name="title">
<ns3:value>Example title</ns3:value>
</ns3:string>
</ns3:metacard>
</content>
</entry>
</feed>
Query Result Mapping
| XPath to Atom XML | Value |
|---|---|
| /feed/title | "Query Response" |
| /feed/updated | ISO 8601 dateTime of when the feed was generated |
| /feed/id | Generated UUID URN (http://en.wikipedia.org/wiki/Universally_Unique_Identifier) |
| /feed/author/name | Platform Global Configuration organization |
| /feed/generator | Platform Global Configuration site name |
| /feed/generator/@version | Platform Global Configuration version |
| /feed/os:totalResults | SourceResponse number of hits |
| /feed/os:itemsPerPage | Request's page size |
| /feed/os:startIndex | Request's start index |
| /feed/entry/fs:resultSource/@fs:sourceId | Source ID from which the Result came: Metacard.getSourceId() |
| /feed/entry/relevance:score | Result's relevance score, if applicable |
| /feed/entry/id | urn:catalog:id:<Metacard.ID> |
| /feed/entry/title | Metacard.TITLE |
| /feed/entry/updated | ISO 8601 dateTime of Metacard.MODIFIED |
| /feed/entry/published | ISO 8601 dateTime of Metacard.CREATED |
| /feed/entry/link[@rel='related'] | URL to retrieve the underlying resource (if applicable and a link is available) |
| /feed/entry/link[@rel='alternate'] | Link to an alternate view of the Metacard (if a link is available) |
| /feed/entry/category | Metacard.CONTENT_TYPE |
| /feed/entry//georss:where | GeoRSS GML of every Metacard attribute with format AttributeFormat.GEOMETRY |
| /feed/entry/content | Metacard XML generated by ddf.catalog.transform.MetacardTransformer with shortname=xml. If no transformer is found, /feed/entry/content/@type will be text and Metacard.ID is displayed. |
XML Query Response Transformer
The XML Query Response Transformer is responsible for translating a query response into an XML formatted document. The metacards element that is generated is an extension of gml:AbstractFeatureCollectionType, which makes the output of this transformer GML 3.1.1 compatible.
Installing and Uninstalling
This transformer comes installed out of the box and runs on startup. To install or uninstall it manually, manage the catalog-transformer-xml feature via the Web Console (http://localhost:8181/system/console) or the System Console.
Configuring
None
Using
Using the OpenSearch Endpoint for example, query with the format option set to the XML shortname xml.
http://localhost:8181/services/catalog/query?q=input&format=xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns3:metacards xmlns:ns1="http://www.opengis.net/gml" xmlns:ns2="http://www.w3.org/1999/xlink" xmlns:ns3="urn:catalog:metacard" xmlns:ns4="http://www.w3.org/2001/SMIL20/" xmlns:ns5="http://www.w3.org/2001/SMIL20/Language">
<ns3:metacard ns1:id="000ba4dd7d974e258845a84966d766eb">
<ns3:type>ddf.metacard</ns3:type>
<ns3:source>southwestCatalog1</ns3:source>
<ns3:dateTime name="created">
<ns3:value>2013-04-10T15:30:05.702-07:00</ns3:value>
</ns3:dateTime>
<ns3:string name="title">
<ns3:value>Input 1</ns3:value>
</ns3:string>
</ns3:metacard>
<ns3:metacard ns1:id="00c0eb4ba9b74f8b988ef7060e18a6a7">
<ns3:type>ddf.metacard</ns3:type>
<ns3:source>southwestCatalog1</ns3:source>
<ns3:dateTime name="created">
<ns3:value>2013-04-10T15:30:05.702-07:00</ns3:value>
</ns3:dateTime>
<ns3:string name="title">
<ns3:value>Input 2</ns3:value>
</ns3:string>
</ns3:metacard>
</ns3:metacards>
Implementation Details
| Registered Interface | Service Property | Value |
|---|---|---|
| ddf.catalog.transform.QueryResponseTransformer | shortname | xml |
| | description | Transforms query results into xml |
| | title | View as XML… |
See XML Metacard Transformer Implementation Details as to how metacard Java object information is mapped into XML.
Known Issues
None
SearchUI
The SearchUI is a QueryResponseTransformer that not only renders results in HTML format but also provides a convenient, simple querying user interface. It is primarily used as a test tool and for verification of configuration. The left pane of the SearchUI contains basic fields for querying the Catalog and other Sources. The right pane contains the results returned from the query.
Installing and Uninstalling
Catalog Transformers App will install this feature when deployed. This transformer’s feature, catalog-transformer-ui, can be uninstalled or installed using the normal processes described in the Configuring DDF section.
Configuring
In the Admin Console the SearchUI can be configured under the Catalog HTML Query Response Transformer.
| Title | Property | Type | Description | Default Value | Required |
|---|---|---|---|---|---|
| Header | header | String | Specifies the header text to be rendered on the SearchUI | | yes |
| Footer | footer | String | Specifies the footer text to be rendered on the SearchUI | | yes |
| Template | template | String | Specifies the path to the template | /templates/searchpage.ftl | |
| Text Color | color | String | Specifies the text color of the header and footer | yellow | yes |
| Background Color | background | String | Specifies the background color of the header and footer | green | yes |
Using
In order to obtain the SearchUI, a user must use the transformer with an endpoint that queries the Catalog such as the OpenSearch Endpoint. If a distribution is running locally, clicking on the following link http://localhost:8181/search/simple should bring up the Simple Search UI. After the page has loaded, enter the desired search criteria in the appropriate fields. Then click the "Search" button in order to execute the search on the Catalog. The "Clear" button will reset the query criteria specified.
Query Response Result Mapping
| SearchUI Column Title | Catalog Result | Notes |
|---|---|---|
| Title | Metacard.TITLE | The title may be hyperlinked to view the full Metacard |
| Source | Metacard.getSourceId() | Source where the Metacard was discovered |
| Location | Metacard.LOCATION | Geographical location of the Metacard |
| Time | Metacard.CREATED or Metacard.EFFECTIVE | Time received/created |
| Thumbnail | Metacard.THUMBNAIL | No column is shown if no results have a thumbnail |
| Resource | Metacard.RESOURCE_URI | No column is shown if no results have a resource |
Search Criteria
The SearchUI allows querying a Catalog using the following methods:
- Keyword Search - searching with keywords using the grammar of the underlying endpoint/Catalog.
- Temporal Search - searching based on relative or absolute time.
- Spatial Search - searching spatially with a point-radius or bounding box.
- Content Type Search - searching for specific Metacard.CONTENT_TYPE values.
Known Issues
If the SearchUI results do not provide usable links on the metacard results, verify that a valid host has been entered in the Platform Global Configuration.
Developing a Query Response Transformer
A QueryResponseTransformer is used to transform a List of Results from a SourceResponse. Query Response Transformers can be used through the Catalog transform convenience method or requested from the OSGi Service Registry by endpoints or other bundles.
Create a New Query Response Transformer
- Create a new Java class that implements ddf.catalog.transform.QueryResponseTransformer.
  public class SampleResponseTransformer implements ddf.catalog.transform.QueryResponseTransformer
- Implement the transform method.
  public BinaryContent transform(SourceResponse upstreamResponse, Map<String, Serializable> arguments) throws CatalogTransformerException
- Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
  Import-Package: ddf.catalog, ddf.catalog.transform
- Create an OSGi descriptor file to communicate with the OSGi Service Registry (described in the Working with OSGi section). Export the service to the OSGi registry and declare service properties.
...
<service ref="[[SampleResponseTransformer]]" interface="ddf.catalog.transform.QueryResponseTransformer">
<service-properties>
<entry key="shortname" value="[[sampletransform]]" />
<entry key="title" value="[[Sample Response Transformer]]" />
<entry key="description" value="[[A new transformer for response queues.]]" />
</service-properties>
</service>
...
- Deploy the OSGi bundle to the OSGi runtime.
Variable Descriptions
Blueprint Service properties
| Key | Description of Value | Example |
|---|---|---|
| shortname | An abbreviation for the return-type of the BinaryContent being sent to the user. | atom |
| title | A user-readable title that describes (in greater detail than the shortname) the service. | Atom Entry Transformer Service |
| description | A short, human-readable description that describes the functionality of the service and the output. | This service converts a single metacard XML document to an atom entry element. |
XSLT Transformer
XSLT Transformer Framework
The XSLT Transformer Framework allows developers to create light-weight Query Response Transformers and Metacard Transformers using only a bundle header and XSLT files. The XSLT Transformer Framework registers bundles, following the XSLT Transformer Framework bundle pattern, as new transformer services. The service-xslt-transformer feature is part of the DDF core.
Examples
Examples of XSLT Transformers using the XSLT Transformer Framework include service-atom-transformer and service-html-transformer, found in the services folder of the source code trunk.
Developing an XSLT Transformer
Implement an XSLT Transformer
- Create a new Maven project.
- Configure the POM to create a bundle using the Maven bundle plugin.
- Add the transform output MIME type to the bundle headers.
- Add XSLT files.
Bundle POM Configuration
Configure the Maven project to create an OSGi bundle using the maven-bundle-plugin. Change the DDF-Mime-Type to match the MIME type of the transformer output.
...
<build>
<plugins>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<extensions>true</extensions>
<configuration>
<instructions>
<DDF-Mime-Type>[[Transform Result MIME Type]]</DDF-Mime-Type>
<Bundle-SymbolicName>docs</Bundle-SymbolicName>
<Import-Package />
<Export-Package />
</instructions>
</configuration>
</plugin>
</plugins>
</build>
...
Including XSLT
The XSLT Transformer Framework scans for XSLT files inside a bundle. The XSLT file must have a .xsl or .xslt extension and be placed in the correct directory location relative to the root of the bundle. The path depends on whether the XSLT will act as a Metacard Transformer, a Query Response Transformer, or both. The name of the XSLT file is used as the transformer's shortname.
// Metacard Transformer
<bundle root>
/OSGI-INF
/ddf
/xslt-metacard-transformer
/<transformer shortname>.[xsl|xslt]
// Query Response Transformer
<bundle root>
/OSGI-INF
/ddf
/xslt-response-queue-transformer
/<transformer shortname>.[xsl|xslt]
The XSLT file has access to metacard or Query Response XML data, depending on which folder the XSLT file is located in. The Metacard XML format will depend on the metadata schema used by the Catalog Provider.
For Query Response XSLT Transformers, the available XML data for XSLT transform has the following structure:
<results>
<metacard>
<id>[[Metacard ID]]</id>
<score>[[Relevance score]]</score>
<distance>[[Distance from query location]]</distance>
<site>[[Source of result]]</site>
<type qualifier="type">[[Type]]</type>
<updated>[[Date last updated]]</updated>
<geometry>[[WKT geometry]]</geometry>
<document>
[[Metacard XML]]
</document>
</metacard>
...
</results>
The XSLT file has access to additional parameters. The Map<String, Serializable> arguments from the transform method parameters is merged with the available XSLT parameters.
- Query Response Transformers
  - grandTotal - total result count
- Metacard Transformers
  - id - metacard ID
  - siteName - source ID
  - services - list of displayable titles and URLs of available metacard transformers
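The parameter mechanism can be demonstrated outside DDF with plain JAXP: a stylesheet matching the <results> document shown above receives a grandTotal parameter, just as the transform arguments are merged into the XSLT parameters. Class and method names here are illustrative only:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Demonstrates passing an external parameter (grandTotal) into an XSLT
// that reads the <results> XML structure described above.
public class XsltParameterDemo {
    static String render(String resultsXml, String grandTotal) {
        String xsl =
            "<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">" +
            "  <xsl:output method=\"text\"/>" +
            "  <xsl:param name=\"grandTotal\"/>" +
            "  <xsl:template match=\"/results\">" +
            "    <xsl:value-of select=\"count(metacard)\"/> of <xsl:value-of select=\"$grandTotal\"/>" +
            "  </xsl:template>" +
            "</xsl:stylesheet>";
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
            t.setParameter("grandTotal", grandTotal); // arguments become XSLT parameters
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(resultsXml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<results><metacard><id>a</id></metacard></results>";
        System.out.println(render(xml, "42").trim()); // 1 of 42
    }
}
```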
RSS Example
- Create a Maven project named service-rss-transformer.
- Add the following to its POM file.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<packaging>bundle</packaging>
<modelVersion>4.0.0</modelVersion>
<parent>
<artifactId>services</artifactId>
<groupId>ddf</groupId>
<version>[[DDF release version]]</version>
</parent>
<groupId>ddf.services</groupId>
<artifactId>service-rss-transformer</artifactId>
<build>
<plugins>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<extensions>true</extensions>
<configuration>
<instructions>
<DDF-Mime-Type>application/rss+xml</DDF-Mime-Type>
<Bundle-SymbolicName>docs</Bundle-SymbolicName>
<Import-Package />
<Export-Package />
</instructions>
</configuration>
</plugin>
</plugins>
</build>
</project>
| Line # | Comment |
|---|---|
| 8 | Use the current release version. |
| 21 | Set the MIME type to the RSS MIME type. |
- Add service-rss-transformer/src/main/resources/OSGI-INF/ddf/xslt-response-queue-transformer/rss.xsl. The transformer will be a Query Response Transformer with the shortname rss, based on the XSL filename and path.
- Add the following XSL to the new file.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:gml="http://www.opengis.net/gml" exclude-result-prefixes="xsl gml">
<xsl:output method="xml" version="1.0" indent="yes" />
<xsl:param name="grandTotal" />
<xsl:param name="url" />
<xsl:template match="/">
<xsl:apply-templates />
</xsl:template>
<xsl:template match="results">
<rss version="2.0">
<channel>
<title>Query Results</title>
<link><xsl:value-of select="$url" disable-output-escaping="yes" /></link>
<description>Query Results of <xsl:value-of select="count(//metacard)" /> out of <xsl:value-of select="$grandTotal" /></description>
<xsl:for-each select="metacard/document">
<item>
<guid>
<xsl:value-of select="../id" />
</guid>
<title>
<xsl:value-of select="Data/title" />
</title>
<link>
<xsl:value-of select="substring-before($url,'/services')" /><xsl:text>/services/catalog/</xsl:text><xsl:value-of select="../id" /><xsl:text>?transform=html</xsl:text>
</link>
<description>
<xsl:value-of select="//description" />
</description>
<author>
<xsl:choose>
<xsl:when test="Data/creator">
<xsl:value-of select="Resource/creator//name" />
</xsl:when>
<xsl:when test="Data/publisher">
<xsl:value-of select="Data/publisher//name" />
</xsl:when>
<xsl:when test="Data/unknown">
<xsl:value-of select="Data/unknown//name" />
</xsl:when>
</xsl:choose>
</author>
<xsl:if test=".//@posted" >
<pubDate>
<xsl:value-of select=".//posted" />
</pubDate>
</xsl:if>
</item>
</xsl:for-each>
</channel>
</rss>
</xsl:template>
</xsl:stylesheet>
| Line # | Comment |
|---|---|
| 8-9 | Example of using additional parameters and arguments. |
| 15 | Example of using the Query Response XML data. |
| 21,27 | Example of using the Metacard XML data. |
Extending Federation
Federation provides the capability to extend the DDF enterprise to include Remote Sources, which may include other instances of DDF. The Catalog handles all aspects of federated queries as they are sent to the Catalog Provider and Remote Sources, processed, and the query results are returned. Queries can be scoped to include only the local Catalog Provider (and any Connected Sources), only specific Federated Sources, or the entire enterprise (which includes all local and Remote Sources). If the query is supposed to be federated, the Catalog Framework passes the query to a Federation Strategy, which is responsible for querying each federated source that is specified. The Catalog Framework is also responsible for receiving the query results from each federated source and returning them to the client in the order specified by the particular federation strategy used. After the federation strategy handles the results, the Catalog returns them to the client through the Endpoint. Query results returned from a federated query are a list of metacards. The source ID in each metacard identifies the Source from which the metacard originated.
The Catalog normalizes the incoming query into an OGC Filter format. When the query is disseminated by the Catalog Framework to the sources, each source is responsible for denormalizing the OGC Filter formatted query into the format understood by the external store that the source is acting as a proxy. This normalization/denormalization is what allows any endpoint to interface with any type of source. For example, a query received by the OpenSearch Endpoint can be executed against an OpenSearch Source.
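To make the normalization/denormalization idea concrete, the following self-contained sketch (plain Java, not the actual DDF or OGC Filter API; the LikeFilter record and the two translator methods are illustrative stand-ins) shows a single neutral filter representation being translated into two different native query formats by two "sources":

```java
public class FilterTranslationSketch {
    // Hypothetical neutral filter: attribute LIKE pattern (a stand-in for an OGC Filter)
    record LikeFilter(String attribute, String pattern) {}

    // Each "source" denormalizes the neutral filter into its native query syntax
    static String toKeywordQuery(LikeFilter f) {
        return "q=" + f.pattern();                            // OpenSearch-style keyword query
    }

    static String toCqlQuery(LikeFilter f) {
        return f.attribute() + " LIKE '" + f.pattern() + "'"; // CQL-style query
    }

    public static void main(String[] args) {
        LikeFilter filter = new LikeFilter("title", "pirate*");
        System.out.println(toKeywordQuery(filter)); // q=pirate*
        System.out.println(toCqlQuery(filter));     // title LIKE 'pirate*'
    }
}
```

Because every endpoint produces the same neutral form and every source consumes it, any endpoint can be paired with any source without either knowing about the other.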
Federation Strategy
A federation strategy federates a query to all of the Remote Sources in the query’s list, processes the results in a unique way, then returns the results to the client. For example, implementations can choose to block until all results return then perform a mass sort or return the results back to the client as soon as they are received back from a Federated Source.
Usage
An endpoint can optionally specify the federation strategy to use when it invokes the query operation. Otherwise, the Catalog provides a default federation strategy that will be used.
Catalog Federation Strategy
The Catalog Federation Strategy is the default federation strategy and is based on sorting metacards by the sorting parameter specified in the federated query.
The possible sorting values are:
-
metacard’s effective date/time
-
temporal data in the query result
-
distance data in the query result
-
relevance of the query result
The supported sorting orders are ascending and descending.
The default sorting value/order automatically used is relevance descending.
|
The Catalog Federation Strategy expects the results returned from the Source to be sorted based on whatever sorting criteria were specified. If a metadata record in the query results contains null values for the sorting criteria elements, the Catalog Federation Strategy expects that result to come at the end of the result list. |
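The sorting behavior described above can be sketched in plain Java (the Result record and its relevance field are illustrative stand-ins, not DDF types): results are ordered by the sort criterion descending, with null sort values pushed to the end of the list, as the strategy expects.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class RelevanceSortSketch {
    // Hypothetical query result: an ID plus a relevance score that may be null
    record Result(String id, Double relevance) {}

    // Relevance descending; results with a null relevance sort to the end
    static List<Result> sortByRelevanceDescending(List<Result> results) {
        Comparator<Double> descendingNullsLast =
                Comparator.nullsLast(Comparator.reverseOrder());
        return results.stream()
                .sorted(Comparator.comparing(Result::relevance, descendingNullsLast))
                .toList();
    }

    public static void main(String[] args) {
        List<Result> sorted = sortByRelevanceDescending(Arrays.asList(
                new Result("a", 0.4), new Result("b", null), new Result("c", 0.9)));
        // Order: c (0.9), a (0.4), b (null last)
        sorted.forEach(r -> System.out.println(r.id()));
    }
}
```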
Configuration
The Catalog Federation Strategy configuration can be found in the web console under Configuration → Catalog Federation Strategy.
| Property | Type | Description | Default Value | Required |
|---|---|---|---|---|
| maxStartIndex | Integer | The maximum query offset (any number from 1 to unlimited). Setting the number too high allows offset queries that could cause an out-of-memory error, because DDF will cycle through all records in memory. | 50000 | yes |
| expirationIntervalInMinutes | Long | Interval at which the Solr Cache checks for expired documents to remove. | 10 | yes |
| expirationAgeInMinutes | Long | The number of minutes a document remains in the cache before it expires. The default is 7 days. | 10080 | yes |
| url | String | HTTP URL of the Solr 4.x server. | | yes |
| cachingEverything | Boolean | Cache all results unless configured as native. | false | yes |
| Managed Service PID | ddf.catalog.federation.impl.CachingFederationStrategy |
|---|---|
| Managed Service Factory PID | N/A |
Extending Eventing
The Eventing capability of the Catalog allows endpoints (and thus external users) to create a "standing query" and be notified when a matching metacard is created, updated, or deleted.
Notably, the Catalog allows event evaluation on both the previous value (if available) and new value of a Metacard when an update occurs.
To better understand why this would be useful, suppose that there has been increased pirating activity off the coast of Somalia. Because of these events, a group of intelligence analysts is interested in determining the reason for the heightened activity and discovering its cause. To do this, analysts need to monitor interesting events occurring in that area. Without DDF Eventing, the analysts would need to repeatedly query for any records of events or intelligence gathered in that area. Analysts would have to monitor changes or anything of interest. However, with DDF Eventing, the analysts can create a subscription indicating criteria for the types of intelligence of interest. In this scenario, analysts could specify interest in metacards added, updated, or deleted that describe data obtained around the coast of Somalia. Through this subscription, DDF will send event notifications back to the team of analysts containing metadata of interest. Furthermore, they could filter the records not only spatially, but by any other criteria that would zero in on the most interesting records. For example, a fishing company that has operated ships peacefully in the same region for a long time may not be interesting. To exclude metadata about that company, analysts may add contextual criteria indicating to return only records containing the keyword "pirate." With the subscription in place, analysts will only be notified of metadata related to the pirating activity, giving them better situational awareness.
The key components of DDF Eventing include:
-
Subscription
-
Delivery Method
-
Event Processor
After reading this section, you will be able to:
-
Create new subscriptions
-
Register subscriptions
-
Perform operations on event notification
-
Remove a subscription
Subscription
Subscriptions represent "standing queries" in the Catalog. Like a query, subscriptions are based on the OGC Filter specification.
Subscription Lifecycle
Creation
-
Subscriptions are created directly with the Event Processor or declaratively through use of the Whiteboard Design Pattern.
-
The Event Processor will invoke each Pre-Subscription Plugin and, if the subscription is not rejected, the subscription will be activated.
Evaluation
-
When a metacard matching the subscription is created, updated, or deleted in any Source, each Pre-Delivery Plugin will be invoked.
-
If the delivery is not rejected, the associated Delivery Method callback will be invoked.
Update Evaluation
Notably, the Catalog allows event evaluation on both the previous value (if available) and new value of a Metacard when an update occurs.
Durability
Subscription durability is not provided by the Event Processor. Thus, all subscriptions are transient and will not be recreated in the event of a system restart. It is the responsibility of Endpoints using subscriptions to persist and re-establish the subscription on startup. This decision was made for the sake of simplicity, flexibility, and the inability of the Event Processor to recreate a fully-configured Delivery Method without being overly restrictive.
|
Subscriptions are not persisted by the Catalog itself. |
Creating a Subscription
Currently, the Catalog reference implementation does not contain a subscription endpoint. Nevertheless, an endpoint that exposes a web service interface to create, update, and delete subscriptions would allow a client to provide its subscription’s filtering criteria, which the Catalog’s Event Processor uses to determine which create, update, and delete events are of interest to the client. The endpoint client also provides the callback URL of the event consumer to be called when an event matching the subscription’s criteria is found. This callback to the event consumer is made by a Delivery Method implementation that the client provides when the subscription is created. Whenever an event occurs in the Catalog matching the subscription, the Delivery Method implementation will be called by the Event Processor. The Delivery Method will, in turn, send the event notification out to the event consumer. As part of the subscription creation process, the Catalog verifies that the event consumer at the specified callback URL is available to receive callbacks. Therefore, the client must ensure the event consumer is running prior to creating the subscription. The Catalog completes the subscription creation by executing any pre-subscription Catalog Plugins, and then registering the subscription with the OSGi Service Registry. The Catalog does not persist subscriptions by default.
Delivery Method
A Delivery Method provides the operation (created, updated, deleted) for how an event’s metacard can be delivered.
A Delivery Method is associated with a subscription and contains the callback URL of the event consumer to be notified of events. The Delivery
Method encapsulates the operations to be invoked by the Event Processor when an event matches the criteria for the subscription. The Delivery Method’s operations are responsible for invoking the corresponding operations on the event consumer associated with the callback URL.
Event Processor
The Event Processor provides an engine that creates, updates, and deletes subscriptions for event notification. These subscriptions optionally specify a filter criteria so that only events of interest to the subscriber are posted for notification.
An internal subscription tracker monitors the OSGi registry, looking for subscriptions to be added (or deleted). When it detects a subscription being added, it informs the Event Processor, which sets up the subscription’s filtering and is responsible for posting event notifications to the subscriber when events satisfying their criteria are met.
Event Processing and Notification
As metacards are created, updated, and deleted, the Catalog’s Event Processor is invoked (as a post-ingest plugin) for each of these events. The Event Processor applies the filter criteria of each registered subscription to each of these ingest events to determine if they match the criteria. If an event matches a subscription’s criteria, any pre-delivery plugins that are installed are invoked, the subscription’s Delivery Method is retrieved, and its operation corresponding to the type of ingest event is invoked. For example, the DeliveryMethod’s created() function is called when a metacard is created. The Delivery Method’s operations subsequently invoke the corresponding operation in the client’s event consumer service, which is specified by the callback URL provided when the Delivery Method was created.
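The dispatch loop described above can be modeled with a minimal, self-contained sketch (plain Java; the Metacard and Subscription records here are illustrative stand-ins for the DDF interfaces): each ingest event is tested against every registered subscription's filter, and only matching events are delivered.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class EventDispatchSketch {
    // Illustrative stand-in for a metacard
    record Metacard(String id, String keywords) {}

    // A subscription pairs a filter with a "created" delivery callback
    record Subscription(Predicate<Metacard> filter, Consumer<Metacard> created) {}

    final List<Subscription> subscriptions = new ArrayList<>();

    void register(Subscription s) {
        subscriptions.add(s);
    }

    // Invoked (like a post-ingest plugin) for each created metacard
    void onCreated(Metacard m) {
        for (Subscription s : subscriptions) {
            if (s.filter().test(m)) {
                s.created().accept(m); // deliver only events matching the filter
            }
        }
    }

    public static void main(String[] args) {
        EventDispatchSketch processor = new EventDispatchSketch();
        List<String> delivered = new ArrayList<>();
        processor.register(new Subscription(
                m -> m.keywords().contains("pirate"),
                m -> delivered.add(m.id())));
        processor.onCreated(new Metacard("1", "pirate sighting"));
        processor.onCreated(new Metacard("2", "fishing report"));
        System.out.println(delivered); // [1]
    }
}
```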
Standard Event Processor
The Standard Event Processor is an implementation of the Event Processor and provides the ability to create/delete subscriptions. Events are generated by the DDF Catalog Framework as metacards are created/updated/deleted and the Standard Event Processor is called since it is also a Post-Ingest Plugin. The Standard Event Processor checks each event against each subscription’s criteria.
When an event matches a subscription’s criteria the Standard Event Processor:
-
invokes each pre-delivery plugin on the metacard in the event
-
invokes the Delivery Method’s operation corresponding to the type of event being processed, e.g., created operation for the creation of a metacard
Installing and Uninstalling
The Standard Event Processor is automatically installed/uninstalled when the Standard Catalog Framework is installed/uninstalled.
Known Issues
The Standard Event Processor currently broadcasts federated events and should not. It should only broadcast events that were generated locally; all other events should be dropped.
Fanout Event Processor
The Fanout Event Processor is used when DDF is configured as a fanout proxy. The only difference between the Fanout Event Processor and the Standard Event Processor is that the source ID in the metacard of each event is overridden with the fanout’s source ID. This is done to hide the source names of the Remote Sources in the fanout’s enterprise. Otherwise, the Fanout Event Processor functions exactly like the Standard Event Processor.
Installing and Uninstalling
The Fanout Event Processor is automatically installed/uninstalled when the Catalog Fanout Framework App is installed/uninstalled.
Known Issues
None
Working with Subscriptions
Creating a Subscription
Using DDF Implementation
If applicable, the implementation of Subscription that comes with DDF should be used. It is available at ddf.catalog.event.impl.SubscriptionImpl and offers a constructor that takes in all of the necessary objects. Specifically, all that is needed is a Filter, DeliveryMethod, Set<String> of source IDs, and a boolean for enterprise.
The following is an example code stub showing how to create a new instance of Subscription using the DDF implementation.
// Create a new filter using an imported FilterBuilder
Filter filter = filterBuilder.attribute(Metacard.ANY_TEXT).like().text("*");

// Create an implementation of DeliveryMethod
DeliveryMethod deliveryMethod = new MyCustomDeliveryMethod();

// Create a set of source IDs
// This set is empty because the subscription is not specific to any sources
Set<String> sourceIds = new HashSet<String>();

// Set the isEnterprise boolean value
// This subscription example should receive notifications from all sources (not just local)
boolean isEnterprise = true;

Subscription subscription = new SubscriptionImpl(filter, deliveryMethod, sourceIds, isEnterprise);
Creating a Custom Implementation
To create a subscription in DDF the developer needs to implement the ddf.catalog.event.Subscription interface. This interface extends org.opengis.filter.Filter in order to represent the subscription’s filter criteria. Furthermore, the Subscription interface contains a DeliveryMethod implementation.
When implementing Subscription, the developer will need to override the methods accept and evaluate from the Filter. The accept method allows the visitor pattern to be applied to the Subscription. A FilterVisitor can be passed into this method in order to process the Subscription’s Filter. In DDF, this method is used to convert the Subscription’s Filter into a predicate format that is understood by the Event Processor. The second method inherited from Filter is evaluate. This method is used to evaluate an object against the Filter’s criteria in order to determine if it matches the criteria. See the Creating Filters section of the Developer’s Guide for more information on OGC filters.
|
The functionality of these overridden methods is typically delegated to the Filter object that the Subscription encapsulates. |
The developer must also define getDeliveryMethod. The DeliveryMethod it returns is invoked when an event occurs that matches the filter of the subscription. More information on how to create a DeliveryMethod is in the Creating a Delivery Method section of this page.
The other two methods required because Subscription implements Federatable are isEnterprise and getSourceIds, which indicate that the subscription should watch for events occurring on all sources in the enterprise or on specified sources.
The following is an implementation stub of Subscription that comes with DDF and is available at ddf.catalog.event.impl.SubscriptionImpl.
public class SubscriptionImpl implements Subscription {

    private Filter filter;
    private DeliveryMethod dm;
    private Set<String> sourceIds;
    private boolean enterprise;

    public SubscriptionImpl(Filter filter, DeliveryMethod dm, Set<String> sourceIds,
            boolean enterprise) {
        this.filter = filter;
        this.dm = dm;
        this.sourceIds = sourceIds;
        this.enterprise = enterprise;
    }

    @Override
    public boolean evaluate(Object object) {
        return filter.evaluate(object);
    }

    @Override
    public Object accept(FilterVisitor visitor, Object extraData) {
        return filter.accept(visitor, extraData);
    }

    @Override
    public Set<String> getSourceIds() {
        return sourceIds;
    }

    @Override
    public boolean isEnterprise() {
        return enterprise;
    }

    @Override
    public DeliveryMethod getDeliveryMethod() {
        return dm;
    }
}
Registering a Subscription
Once a Subscription is created, it needs to be registered in the OSGi Service Registry as a ddf.catalog.event.Subscription service. This is necessary for the Subscription
to be discovered by the Event Processor. Typically, this is done in code after the Subscription is instantiated. When the Subscription is registered, a unique ID will need to be specified using the key subscription-id. This will be used to delete the Subscription from the OSGi Service Registry. Furthermore, the ServiceRegistration, which is the return value from registering a Subscription, should be monitored in order to remove the Subscription later. The following code shows how to correctly register a Subscription implementation in the registry using the above SubscriptionImpl for clarity:
// Map to keep track of registered Subscriptions. Used for unregistering Subscriptions.
Map<String, ServiceRegistration<Subscription>> subscriptions =
        new HashMap<String, ServiceRegistration<Subscription>>();

// New Subscription using the DDF implementation of Subscription
Subscription subscription = new SubscriptionImpl(filter, deliveryMethod, sourceIds, isEnterprise);

// Specify the subscription-id to uniquely identify the Subscription
String subscriptionId = "0123456789abcdef0123456789abcdef";
Dictionary<String, String> properties = new Hashtable<String, String>();
properties.put("subscription-id", subscriptionId);

// Service registration requires an instance of the OSGi bundle context
// Register the Subscription and keep track of the service registration
ServiceRegistration<Subscription> serviceRegistration =
        context.registerService(ddf.catalog.event.Subscription.class, subscription, properties);
subscriptions.put(subscriptionId, serviceRegistration);
Creating a Delivery Method
The Event Processor obtains the subscription’s DeliveryMethod and invokes one of its four methods when an event occurs. The DeliveryMethod then handles that invocation and communicates an event to a specified consumer service outside of DDF.
The Event Processor calls the DeliveryMethod’s created method when a new metacard matching the filter criteria is added to the Catalog. It calls the deleted method when a metacard that matched the filter criteria is removed from the Catalog. updatedHit is called when a metacard is updated and the new metacard matches the subscription. updatedMiss is different in that it is only called if the old metacard matched the filter but the new metacard no longer does. An example of this would be a filter containing spatial criteria covering Arizona. If a plane is flying over Arizona, the Event Processor will repeatedly call updatedHit as the plane flies from one side to the other while updating its position in the Catalog. This happens because the updated records continually match the specified criteria. If the plane crosses into New Mexico, the previous metacard will have matched the filter, but the new metacard will not. Thus, updatedMiss gets called.
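The choice between these callbacks reduces to whether the old and new metacards match the filter. A self-contained sketch of that decision logic (illustrative, not the DDF implementation):

```java
public class UpdateDispatchSketch {
    // Which DeliveryMethod operation an update maps to, given whether the
    // old and new metacards match the subscription's filter
    static String dispatch(boolean oldMatches, boolean newMatches) {
        if (newMatches) {
            return "updatedHit";  // new metacard matches the subscription
        }
        if (oldMatches) {
            return "updatedMiss"; // previously matched, no longer does
        }
        return "none";            // neither matches: no notification
    }

    public static void main(String[] args) {
        System.out.println(dispatch(true, true));   // plane still over Arizona -> updatedHit
        System.out.println(dispatch(true, false));  // plane crossed into New Mexico -> updatedMiss
        System.out.println(dispatch(false, false)); // never matched -> none
    }
}
```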
The following is an implementation stub for DeliveryMethod:
public class DeliveryMethodImpl implements DeliveryMethod {

    @Override
    public void created(Metacard newMetacard) {
        // Perform custom code on create
    }

    @Override
    public void updatedHit(Metacard newMetacard, Metacard oldMetacard) {
        // Perform custom code on update (where both new and old metacards matched the filter)
    }

    @Override
    public void updatedMiss(Metacard newMetacard, Metacard oldMetacard) {
        // Perform custom code on update (where one of the two metacards did not match the filter)
    }

    @Override
    public void deleted(Metacard oldMetacard) {
        // Perform custom code on delete
    }
}
Deleting a Subscription
To remove a subscription from DDF, the subscription ID is required. Once this is provided, the ServiceRegistration for the indicated Subscription should be obtained from the Subscriptions Map. Then the Subscription can be removed by unregistering the service. The following code demonstrates how this is done:
String subscriptionId = "0123456789abcdef0123456789abcdef";

// Obtain the service registration from the subscriptions Map based on the subscription ID
ServiceRegistration<Subscription> sr = subscriptions.get(subscriptionId);

// Unregister the Subscription from the OSGi Service Registry
sr.unregister();

// Remove the Subscription from the Map keeping track of registered Subscriptions
subscriptions.remove(subscriptionId);
Extending Resource Components
Resource components are used when working with resources, i.e., the data that is represented by the cataloged metadata.
A resource is a URI-addressable entity that is represented by a metacard. Resources may also be known as products or data.
Resources may exist either locally or on a remote data store.
Examples of resources include:
-
NITF image
-
MPEG video
-
Live video stream
-
Audio recording
-
Document
A resource object in DDF contains an InputStream with the binary data of the resource. It describes that resource with a name, which could be a file name, URI, or another identifier. It also contains a mime type or content type that a client can use to interpret the binary data.
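A minimal sketch of such a resource object (plain Java; SimpleResource is an illustrative stand-in, not the ddf.catalog.resource.Resource interface):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ResourceSketch {
    // Binary data plus a name and a mime type the client uses to interpret the bytes
    record SimpleResource(InputStream inputStream, String mimeType, String name) {}

    public static void main(String[] args) throws IOException {
        byte[] data = "sample bytes".getBytes(StandardCharsets.UTF_8);
        SimpleResource resource = new SimpleResource(
                new ByteArrayInputStream(data), "text/plain", "example.txt");

        // A client reads the stream and interprets it according to the mime type
        String contents = new String(resource.inputStream().readAllBytes(), StandardCharsets.UTF_8);
        System.out.println(resource.name() + " (" + resource.mimeType() + "): " + contents);
    }
}
```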
Resource Readers
A resource reader retrieves resources associated with metacards via URIs. Each resource reader must know how to interpret the resource’s URI and how to interact with the data store to retrieve the resource.
There can be multiple resource readers in a Catalog instance. The Catalog Framework selects the appropriate resource reader based on the scheme of the resource’s URI.
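The scheme-based selection can be sketched as a lookup keyed on the URI's scheme (plain Java; the reader-name map is an illustrative stand-in for the OSGi service lookup the framework actually performs):

```java
import java.net.URI;
import java.util.Map;
import java.util.Optional;

public class ReaderSelectionSketch {
    // Hypothetical registry: URI scheme -> reader name
    static final Map<String, String> READERS = Map.of(
            "http", "URLResourceReader",
            "https", "URLResourceReader",
            "file", "URLResourceReader");

    // Select a reader the way the Catalog Framework does: by the resource URI's scheme
    static Optional<String> selectReader(URI resourceUri) {
        return Optional.ofNullable(READERS.get(resourceUri.getScheme()));
    }

    public static void main(String[] args) {
        System.out.println(selectReader(
                URI.create("file:///home/users/ddf_user/data/example.txt"))); // Optional[URLResourceReader]
        System.out.println(selectReader(
                URI.create("udp://123.45.67.89:80/SampleResourceStream")));   // Optional.empty
    }
}
```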
In order to make a resource reader available to the Catalog Framework, it must be exported to the OSGi Service Registry as a ddf.catalog.resource.ResourceReader.
URL Resource Reader
The URLResourceReader is an implementation of ResourceReader which is included in the DDF Catalog. It obtains a resource given an http, https, or file-based URL. The URLResourceReader will connect to the provided Resource URL and read the resource’s bytes into an InputStream.
|
When a resource linked using a file-based URL is in the product cache, the |
Installing and Uninstalling
URLResourceReader is installed by default with the DDF Catalog.
Configuring
Configurable Properties
URL Resource Reader
| Property | Type | Description | Default Value | Required |
|---|---|---|---|---|
| rootResourceDirectories | String array | Specifies the only directories the URLResourceReader is permitted to access when retrieving file-based resources. | <ddf.home>/data/products | yes |
Using
URLResourceReader will be used by the Catalog Framework to obtain a resource whose metacard is cataloged in the local data store. This particular ResourceReader will be chosen by the CatalogFramework if the requested resource’s URL has a protocol of http, https, or file.
For example, requesting a resource with the following URL will make the Catalog Framework invoke the URLResourceReader to retrieve the product:
file:///home/users/ddf_user/data/example.txt
If a resource was requested with the URL udp://123.45.67.89:80/SampleResourceStream, the URLResourceReader would not be invoked.
Implementation Details
Supported Schemes:
-
http
-
https
-
file
|
If a file-based URL is passed to the URLResourceReader, the resource will only be retrieved if the file resides within one of the directories configured in rootResourceDirectories. |
Known Issues
None
Developing a Resource Reader
A ResourceReader is a class that retrieves a resource or product from a native/external source and returns it to DDF. A simple example is that of a File ResourceReader. It takes a file from the local file system and passes it back to DDF. New implementations can be created in order to support obtaining Resources from various Resource data stores.
Create a New ResourceReader
Complete the following procedure to create a ResourceReader.
-
Create a Java class that implements the ddf.catalog.resource.ResourceReader interface.
-
Deploy the OSGi bundled packaged service to the DDF run-time. (Refer to the Working with OSGi - Bundles section.)
Implementing the ResourceReader Interface
public class TestResourceReader implements ddf.catalog.resource.ResourceReader
ResourceReader has a couple of key methods where most of the work is performed.
|
URI |
retrieveResource
public ResourceResponse retrieveResource(URI uri, Map<String, Serializable> arguments) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
This method is the main entry to the ResourceReader. It is used to retrieve a Resource and send it back to the caller (generally the CatalogFramework). Information needed to obtain the entry is contained in the URI reference. The URI Scheme will need to match a scheme specified in the getSupportedSchemes method. This is how the CatalogFramework determines which ResourceReader implementation to use. If there are multiple ResourceReaders supporting the same scheme, these ResourceReaders will be invoked iteratively. Invocation of the ResourceReaders stops once one of them returns a Resource.
Arguments are also passed in. These can be used by the ResourceReader to perform additional operations on the resource.
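The iterative invocation described above (try each reader that supports the scheme until one returns a resource) can be sketched in plain Java, with readers modeled as simple functions rather than the actual ResourceReader interface:

```java
import java.util.List;
import java.util.function.Function;

public class IterativeReaderSketch {
    // Readers modeled as functions from a URI string to a resource (null = not found)
    static String retrieve(List<Function<String, String>> readers, String uri) {
        for (Function<String, String> reader : readers) {
            String resource = reader.apply(uri); // try the next reader for this scheme
            if (resource != null) {
                return resource; // invocation stops at the first reader that succeeds
            }
        }
        return null; // no reader could retrieve the resource
    }

    public static void main(String[] args) {
        List<Function<String, String>> readers = List.of(
                uri -> null,               // this reader cannot resolve the URI
                uri -> "bytes-of-" + uri); // this one returns the resource
        System.out.println(retrieve(readers, "http://example.com/a")); // bytes-of-http://example.com/a
    }
}
```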
The URLResourceReader (located in the source code at /trunk/ddf/catalog/resource/URLResourceReader.java) provides an example of how retrieveResource can be implemented. This ResourceReader simply reads a file from a URI.
|
The "Map<String, Serializable> arguments" parameter is passed in to support any options or additional information associated with retrieving the resource. |
Implement retrieveResource()
-
Define supported schemes (e.g., file, http, etc.).
-
Check if the incoming URI matches a supported scheme. If it does not, throw
ResourceNotSupportedException.
For example:
if (!uri.getScheme().equals("http")) {
    throw new ResourceNotSupportedException("Unsupported scheme received, was expecting http");
}
-
Implement the business logic.
-
For example, the
URLResourceReaderwill obtain the resource through a connection:
URL url = uri.toURL();
URLConnection conn = url.openConnection();
String mimeType = conn.getContentType();
if (mimeType == null) {
    mimeType = URLConnection.guessContentTypeFromName(url.getFile());
}
InputStream is = conn.getInputStream();
|
The |
-
Return
ResourceinResourceResponse.
For example:
return new ResourceResponseImpl(new ResourceImpl(new BufferedInputStream(is), new MimeType(mimeType), url.getFile()));
If the Resource cannot be found, throw a ResourceNotFoundException.
getSupportedSchemes
public Set<String> getSupportedSchemes();
This method lets the ResourceReader inform the CatalogFramework which URI schemes it accepts and should be passed. Some ResourceReaders accept only a single scheme, while others (such as the URLResourceReader, which handles http, https, and file) understand more than one. A ResourceReader must, at minimum, accept one scheme. As mentioned before, this method is used by the CatalogFramework to determine which ResourceReader to invoke.
|
ResourceReader extends Describable |
Export to OSGi Service Registry
In order for the ResourceReader to be used by the CatalogFramework, it should be exported to the OSGi Service Registry as a ddf.catalog.resource.ResourceReader.
See the XML below for an example:
<bean id="[[customResourceReaderId]]" class="[[example.resource.reader.impl.CustomResourceReader]]" />
<service ref="[[customResourceReaderId]]" interface="ddf.catalog.resource.ResourceReader" />
Resource Writers
A resource writer stores a resource and produces a URI that can be used to retrieve the resource at a later time. The resource URI uniquely locates
and identifies the resource. Resource writers can interact with an underlying data store and store the resource in the proper place. Each
implementation can do this differently, providing flexibility in the data stores used to persist the resources.
Examples
The Catalog reference implementation currently does not include any resource writers out of the box.
Developing a Resource Writer
|
Before implementing a Resource Writer, refer to the Content Framework for alternatives. |
A ResourceWriter is an object used to store or delete a Resource.
ResourceWriter objects should be registered within the OSGi Service Registry, so clients can retrieve an instance when clients need to store a Resource.
Create a New ResourceWriter
Complete the following procedure to create a ResourceWriter.
-
Create a Java class that implements the
ddf.catalog.resource.ResourceWriterinterface.
import java.io.IOException;
import java.net.URI;
import java.util.Map;

import ddf.catalog.resource.Resource;
import ddf.catalog.resource.ResourceNotFoundException;
import ddf.catalog.resource.ResourceNotSupportedException;
import ddf.catalog.resource.ResourceWriter;

public class SampleResourceWriter implements ResourceWriter {

    @Override
    public void deleteResource(URI uri, Map<String, Object> arguments)
            throws ResourceNotFoundException, IOException {
        // WRITE IMPLEMENTATION
    }

    @Override
    public URI storeResource(Resource resource, Map<String, Object> arguments)
            throws ResourceNotSupportedException, IOException {
        // WRITE IMPLEMENTATION
        return null;
    }

    @Override
    public URI storeResource(Resource resource, String id, Map<String, Object> arguments)
            throws ResourceNotSupportedException, IOException {
        // WRITE IMPLEMENTATION
        return null;
    }
}
-
Register the implementation as a Service in the OSGi Service Registry.
...
<service ref="[[ResourceWriterReference]]" interface="ddf.catalog.resource.ResourceWriter" />
...
-
Deploy the OSGi bundled packaged service to the DDF run-time (Refer to the Working with OSGi - Bundles section.)
|
ResourceWriter Javadoc |
Developing a Registry Client
Registry Clients create Federated Sources using the OSGi Configuration Admin. Developers should reference an individual Source’s (Federated, Connected, or Catalog Provider) documentation for the Configuration properties (such as a Factory PID, addresses, intervals, etc.) necessary to establish that Source in the framework.
Example
org.osgi.service.cm.ConfigurationAdmin configurationAdmin = getConfigurationAdmin();
org.osgi.service.cm.Configuration currentConfiguration =
        configurationAdmin.createFactoryConfiguration(getFactoryPid(), null);
Dictionary<String, Object> properties = new Hashtable<String, Object>();
properties.put(QUERY_ADDRESS_PROPERTY, queryAddress);
currentConfiguration.update(properties);
Note that the QUERY_ADDRESS_PROPERTY is specific to this Configuration and might not be required for every Source. The properties necessary for creating a Configuration are different for every Source.
Working with Resources
Metacards and Resources
Metacards are used to describe a resource through metadata. This metadata includes the time the resource was created, the location where the resource was created, etc. A DDF Metacard contains the getResourceUri method, which is used to locate and retrieve its corresponding resource.
Retrieve Resource
When a client attempts to retrieve a resource, it must provide a metacard ID or URI corresponding to a unique resource. As mentioned above, the resource URI is obtained from a Metacard’s getResourceUri method. The CatalogFramework has three methods that can be used by clients to obtain a resource: getEnterpriseResource, getResource, and getLocalResource. The getEnterpriseResource method invokes the retrieveResource method on a local ResourceReader as well as all the Federated and Connected Sources in the DDF enterprise. The second method, getResource, takes in a source ID as a parameter and only invokes retrieveResource on the specified Source. The third method invokes retrieveResource on a local ResourceReader.
The parameter for each of these methods in the CatalogFramework is a ResourceRequest. DDF includes two implementations of ResourceRequest: ResourceRequestById and ResourceRequestByProductUri. Since these implementations extend OperationImpl, they can pass a Map of generic properties through the CatalogFramework to customize how the resource request is carried out. One example of this is explained in the Options section below. The following is a basic example of how to create a ResourceRequest and invoke the CatalogFramework resource retrieval methods to process the request.
Map<String, Serializable> properties = new HashMap<String, Serializable>();
properties.put("PropertyKey1", "propertyA"); //properties to customize Resource retrieval
ResourceRequestById resourceRequest = new ResourceRequestById("0123456789abcdef0123456789abcdef", properties); //object containing ID of Resource to be retrieved
String sourceName = "LOCAL_SOURCE"; //the Source ID or name of the local Catalog or a Federated Source
ResourceResponse resourceResponse; //object containing the retrieved Resource and the request that was made to get it.
resourceResponse = catalogFramework.getResource(resourceRequest, sourceName); //Source-based retrieve Resource request
Resource resource = resourceResponse.getResource(); //actual Resource object containing InputStream, mime type, and Resource name
ddf.catalog.resource.ResourceReader instances can be discovered via the OSGi Service Registry. The system can contain multiple ResourceReaders. The CatalogFramework determines which one to call based on the scheme of the resource’s URI and what schemes the ResourceReader supports. The supported schemes are obtained by a ResourceReader’s getSupportedSchemes method. As an example, one ResourceReader may know how to handle file-based URIs with the scheme file, whereas another ResourceReader may support HTTP-based URIs with the scheme http.
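The scheme-matching behavior described above can be sketched in plain Java. This is a hypothetical stand-in, not the actual DDF interfaces: the Reader interface and the selectReader method are illustrative names mirroring the getSupportedSchemes contract.

```java
import java.net.URI;
import java.util.List;
import java.util.Set;

// Hypothetical sketch (not the DDF API) of scheme-based ResourceReader selection.
class SchemeRoutingSketch {

    // Minimal stand-in for ddf.catalog.resource.ResourceReader's scheme contract
    interface Reader {
        Set<String> getSupportedSchemes();
    }

    // Pick the first reader whose supported schemes include the URI's scheme,
    // mirroring how the framework matches readers to resource URIs.
    static Reader selectReader(List<Reader> readers, URI resourceUri) {
        for (Reader reader : readers) {
            if (reader.getSupportedSchemes().contains(resourceUri.getScheme())) {
                return reader;
            }
        }
        return null; // no reader can handle this scheme
    }

    static final Reader FILE_READER = () -> Set.of("file");
    static final Reader HTTP_READER = () -> Set.of("http", "https");
}
```

A file-based URI such as file:///data/video.mpg would route to the first reader, while an http URI would route to the second.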
The ResourceReader or Source is responsible for locating the resource, reading its bytes, adding the binary data to a Resource implementation, then returning that Resource in a ResourceResponse. The ResourceReader or Source is also responsible for determining the Resource’s name and mime type, which it sends back in the Resource implementation.
See the Developing a Resource Reader section or the Developing a Source section in the Developer’s Guide for more information and examples.
Options
Options can be specified on a retrieve resource request made through any of the supporting endpoints. To specify an option for a retrieve resource request, the endpoint needs to first instantiate a ResourceRequestByProductUri or a ResourceRequestById. Both of these ResourceRequest implementations allow a Map of properties to be specified. Put the specified option into the Map under the key RESOURCE_OPTION.
Map<String, Serializable> properties = new HashMap<String, Serializable>();
properties.put("RESOURCE_OPTION", "OptionA");
ResourceRequestById resourceRequest = new ResourceRequestById("0123456789abcdef0123456789abcdef", properties);
Depending on the support that the ResourceReader or Source provides for options, the properties Map will be checked for the RESOURCE_OPTION entry. If that entry is found, the option is handled in whatever manner the ResourceReader or Source supports. If the ResourceReader or Source does not support options, the entry is ignored.
A new ResourceReader or Source implementation can be created to support options in a way that is most appropriate. Since the option is passed through the catalog framework as a property, the ResourceReader or Source will have access to that option as long as the endpoint supports options.
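The option-handling pattern can be sketched as follows. This is a minimal plain-Java illustration, not DDF code: resolveOption and the "DEFAULT" fallback are hypothetical names showing how an implementation might consult the properties Map for the RESOURCE_OPTION entry.

```java
import java.io.Serializable;
import java.util.Map;

// Hypothetical sketch (not the DDF API) of how a ResourceReader or Source
// might inspect the request properties for the RESOURCE_OPTION entry.
class ResourceOptionSketch {

    static final String RESOURCE_OPTION = "RESOURCE_OPTION";

    // Returns the requested option, or a default when the entry is absent
    // or the caller passed no properties at all; an implementation that
    // does not support options would simply never call this.
    static String resolveOption(Map<String, Serializable> properties) {
        if (properties == null || !properties.containsKey(RESOURCE_OPTION)) {
            return "DEFAULT"; // absent entry: behave as if no option was given
        }
        return String.valueOf(properties.get(RESOURCE_OPTION));
    }
}
```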
Store Resource
Resources are saved using a ResourceWriter. ddf.catalog.resource.ResourceWriter instances can be discovered via the OSGi Service Registry. Once retrieved, the ResourceWriter instance provides clients a way to store resources and get a corresponding URI that can be used to subsequently retrieve the resource via a ResourceReader. Simply invoke either of the storeResource methods with a resource and any optional arguments.
The ResourceWriter implementation is responsible for determining where the resource is saved and how it is saved. This allows flexibility for a resource to be saved in any one of a variety of data stores or file systems. The following is an example of how to use a generic implementation of ResourceWriter.
InputStream inputStream = <Video_Input_Stream>; //InputStream of raw Resource data
MimeType mimeType = new MimeType("video/mpeg"); //Mime Type or content type of Resource
String name = "Facility_Video"; //Descriptive Resource name
Resource resource = new ResourceImpl(inputStream, mimeType, name);
Map<String, Object> optionalArguments = new HashMap<String, Object>();
ResourceWriter writer = new ResourceWriterImpl();
URI resourceUri; //URI that can be used to retrieve Resource
resourceUri = writer.storeResource(resource, optionalArguments); //Null can be passed in here
See the Developing a Resource Writer section in the Developer's Guide for more information and examples.
BinaryContent
BinaryContent is an object used as a container to store translated or transformed DDF components. Resource extends BinaryContent and includes a getName method. BinaryContent has methods to get the InputStream, byte array, MIME type, and size of the represented binary data. An implementation of BinaryContent (BinaryContentImpl) can be found in the Catalog API in the ddf.catalog.data package.
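The accessors BinaryContent exposes can be illustrated with a plain-Java stand-in. This sketch is not the DDF class itself; it just mirrors the four accessors named above over a byte array.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Hypothetical stand-in (not ddf.catalog.data.BinaryContent) mirroring its
// accessors: input stream, byte array, MIME type, and size.
class BinaryContentSketch {
    private final byte[] bytes;
    private final String mimeType;

    BinaryContentSketch(byte[] bytes, String mimeType) {
        this.bytes = bytes;
        this.mimeType = mimeType;
    }

    InputStream getInputStream() { return new ByteArrayInputStream(bytes); }
    byte[] getByteArray()        { return bytes; }
    String getMimeTypeValue()    { return mimeType; }
    long getSize()               { return bytes.length; }
}
```

A transformer producing, say, an XML rendering of a metacard would wrap the resulting bytes in such a container together with a MIME type like text/xml.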
Additional Information
-
URI on Wikipedia (http://en.wikipedia.org/wiki/Uniform_resource_identifier)
-
URI Javadoc (http://docs.oracle.com/javase/6/docs/api/java/net/URI.html)
Developing Catalog Components
This section describes how to create Catalog components. Use in conjunction with the Javadoc to begin extending the DDF Catalog.
Overview
The DDF Content application provides a framework for storing, reading, processing, transforming and cataloging data.
This guide supports developers creating extensions of the existing framework. There are currently no DDF Content development details beyond those covered in the DDF Developer’s Guide.
Whitelist
The following packages have been exported by the DDF Content Application and are approved for use by third parties:
-
ddf.content
-
ddf.content.data
-
ddf.content.operation
-
ddf.content.plugin
-
ddf.content.storage
-
ddf.content.util
-
ddf.content.core.directorymonitor
Overview
This page supports developers creating extensions of the existing framework.
Whitelist
The following packages have been exported by the DDF Platform application and are approved for use by third parties:
-
ddf.action
-
ddf.action.impl
-
ddf.mime
-
ddf.security
-
ddf.security.assertion
-
ddf.security.common.audit
-
ddf.security.permission
-
ddf.security.service
-
ddf.security.ws.policy
-
ddf.security.ws.proxy
-
ddf.security.encryption
-
org.codice.ddf.configuration
-
org.codice.ddf.platform.status
| The Platform Application includes other third party packages such as Apache CXF and Apache Camel. These are available for use by third party developers but their versions can change at any time with future releases of the Platform Application. |
Developing Action Components (Action Framework)
The Action Framework was designed as a way to limit dependencies between applications (apps) in a system. For instance, a feature in an app, such as an Atom feed generator, might want to include an external link as part of its feed’s entries. That feature does not have to be coupled to a REST endpoint to work, nor does it have to depend on a specific implementation to get a link. In reality, the feature does not identify how the link is generated, but it does identify whether the link works when retrieving the intended entry’s metadata. Instead of creating its own mechanism or adding an unrelated feature, it can use the Action Framework to query the OSGi container for any service that can provide a link. This does two things: it allows the feature to be independent of implementations, and it encourages reuse of common services.
The Action Framework consists of two major Java interfaces in its API:
-
ddf.action.Action
-
ddf.action.ActionProvider
Usage
To provide a service, such as a link to a record, the ActionProvider interface should be implemented. An ActionProvider essentially provides an Action when given input that it can recognize and handle. For instance, if a REST endpoint ActionProvider was given a metacard, it could provide a link based on the metacard’s ID. An Action Provider performs an action when given a subject that it understands. If it does not understand the subject or does not know how to handle the given input, it will return null. An Action Provider is required to have an ActionProvider id. The Action Provider must register itself in the OSGi Service Registry with the ddf.action.ActionProvider interface and must also have a service property value for id. An action is a URL that, when invoked, provides a resource or executes intended business logic.
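The behavior described above can be sketched in plain Java. This is a hypothetical illustration, not the DDF ActionProvider interface: the class name, the host value, and the URL path are all assumptions made for the example; only the "return null when the subject is not understood" contract comes from the text.

```java
// Hypothetical sketch (names and path are illustrative, not the DDF API)
// of an Action Provider that turns a metacard id into a view URL and
// returns null for input it does not understand.
class ViewActionProviderSketch {

    // Taxonomy id this provider would register under (see Taxonomy section)
    static final String ID = "catalog.data.metacard.view";

    // Assumed local install; a real provider would read the configured host
    static final String HOST = "https://localhost:8993";

    // Mirrors the contract described in the text: given a subject it
    // understands (here, a non-empty metacard id), produce an action URL;
    // otherwise return null.
    static String getActionUrl(Object subject) {
        if (!(subject instanceof String) || ((String) subject).isEmpty()) {
            return null; // unrecognized subject: this provider cannot act on it
        }
        return HOST + "/services/catalog/" + subject; // assumed endpoint path
    }
}
```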
Naming Convention
For each Action, a title and description should be provided to describe what the action does. The recommended naming convention is to use the verb 'Get' when retrieving a portion of the metacard, such as the metadata or thumbnail, or when downloading the product. The verb 'Export' or the expression 'Export as' is recommended when the metacard is being exported in a different format or presented after undergoing some transformation.
Taxonomy
An Action Provider registers an id as a service property in the OSGi Service Registry based on the type of service or action that is provided. Regardless of implementation, if more than one Action Provider provides the same service, such as providing a URL to a thumbnail for a given metacard, they must both register under the same id. Therefore, Action Provider implementers must follow an Action Taxonomy.
The following is a sample taxonomy:
-
catalog.data.metacard shall be the grouping that represents Actions on a Catalog metacard.
-
catalog.data.metacard.view
-
catalog.data.metacard.thumbnail
-
catalog.data.metacard.html
-
catalog.data.metacard.resource
-
catalog.data.metacard.metadata
Action ID Service Descriptions
| ID | Required Action | Naming Convention |
|---|---|---|
| catalog.data.metacard.view | Provides a valid URL to view all of a metacard’s data. The format of the data is not specified; i.e., the representation can be XML, JSON, or other. | Export as … |
| catalog.data.metacard.thumbnail | Provides a valid URL to the bytes of a thumbnail (Metacard.THUMBNAIL) with MIME type image/jpeg. | Get Thumbnail |
| catalog.data.metacard.html | Provides a valid URL that, when invoked, provides an HTML representation of the metacard. | Export as … |
| catalog.data.metacard.resource | Provides a valid URL that, when invoked, provides the underlying resource of the metacard. | Get Resource |
| catalog.data.metacard.metadata | Provides a valid URL to the XML metadata in the metacard (Metacard.METADATA). | Get Metadata |
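Registering a provider under one of these taxonomy ids follows the standard OSGi pattern of supplying an id service property. The sketch below shows the property-building part in runnable form; the registerService call itself is left as a comment because it requires a running OSGi container, and the BundleContext usage shown is the standard OSGi API, not something DDF-specific.

```java
import java.util.Dictionary;
import java.util.Hashtable;

// Sketch of the service properties an Action Provider would register with.
// The actual registration (commented out) needs a live BundleContext.
class ActionRegistrationSketch {

    static Dictionary<String, Object> serviceProperties(String actionId) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("id", actionId); // taxonomy id, e.g. catalog.data.metacard.thumbnail
        return props;
    }

    // Inside a running bundle (not executable here):
    // bundleContext.registerService(ddf.action.ActionProvider.class.getName(),
    //         provider, serviceProperties("catalog.data.metacard.thumbnail"));
}
```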
Overview
The Security application provides authentication, authorization, and auditing services for DDF. These services comprise both a framework that developers and integrators can extend and a reference implementation that meets security requirements. More information about the security framework and how everything works as a single security solution can be found on the Managing Web Service Security page.
This guide supports developers creating extensions of the existing framework.
Developing Token Validators
Token validators are used by the Security Token Service (STS) to validate incoming token requests. The TokenValidator CXF interface must be implemented by any custom token validator class. The canHandleToken and validateToken methods must be overridden. The canHandleToken method should return true or false based on the ValueType value of the token that the validator is associated with. The validator may be able to handle any number of different tokens that you specify. The validateToken method returns a TokenValidatorResponse object that contains the Principal of the identity being validated and also validates the ReceivedToken object that was collected from the RST (RequestSecurityToken) message.
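The canHandleToken decision described above can be sketched as a simple ValueType comparison. This is a hypothetical illustration, not the CXF TokenValidator interface itself: the ValueType URI used below is an invented placeholder, and a real implementation receives a ReceivedToken rather than a bare string.

```java
import java.util.Set;

// Hypothetical sketch (not the CXF interface) of the decision a custom
// TokenValidator makes in canHandleToken: compare the incoming token's
// ValueType against the types this validator supports.
class TokenValidatorSketch {

    // Placeholder ValueType; a real validator uses the URI its token carries.
    static final Set<String> SUPPORTED_VALUE_TYPES =
            Set.of("urn:example:tokentype:custom");

    // true when this validator recognizes the token type; the STS then
    // calls validateToken, which would produce a TokenValidatorResponse
    // carrying the validated Principal.
    static boolean canHandleToken(String valueType) {
        return valueType != null && SUPPORTED_VALUE_TYPES.contains(valueType);
    }
}
```

A validator may list several ValueTypes in the supported set if it can handle more than one kind of token.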
Usage
At the moment, validators must be added to the cxf-sts.xml blueprint file manually. There is a section labeled as the "Delegate Configuration" in that file. That is where the validator must be added along with the existing validators. In the future, we expect this to be pluggable without changes to the blueprint file.
|
The validator services that are currently bundled on their own are webSSOTokenValidator, SamlTokenValidator, and x509TokenValidator. Each has its own blueprint.xml file that defines and exports the service. They can be individually turned on/off as a bundle/service. |
Overview
The DDF Spatial Application provides a KML transformer and a KML network link endpoint that allows a user to generate a View-based KML Query Results Network Link.
This page supports developers creating extensions of the existing framework.
Overview
The DDF Standard Search UI application allows a user to search for records in the local Catalog (provider) and federated sources. Results of the search are returned in HTML format and are displayed on a globe, providing a visual representation of where the records were found.
This page supports developers creating extensions of the existing framework.